00:00:00.001 Started by upstream project "autotest-per-patch" build number 126230 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.205 Using shallow fetch with depth 1 00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.205 > git --version # timeout=10 00:00:00.241 > git --version # 'git version 2.39.2' 00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.132 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.142 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.153 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:07.154 > git config core.sparsecheckout # timeout=10 00:00:07.164 > git read-tree -mu HEAD # timeout=10 00:00:07.180 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:07.201 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:07.202 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:07.302 [Pipeline] Start of Pipeline 00:00:07.316 [Pipeline] library 00:00:07.318 Loading library shm_lib@master 00:00:07.318 Library shm_lib@master is cached. Copying from home. 00:00:07.332 [Pipeline] node 00:00:07.339 Running on VM-host-SM17 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:07.341 [Pipeline] { 00:00:07.349 [Pipeline] catchError 00:00:07.351 [Pipeline] { 00:00:07.362 [Pipeline] wrap 00:00:07.370 [Pipeline] { 00:00:07.375 [Pipeline] stage 00:00:07.376 [Pipeline] { (Prologue) 00:00:07.387 [Pipeline] echo 00:00:07.388 Node: VM-host-SM17 00:00:07.392 [Pipeline] cleanWs 00:00:07.398 [WS-CLEANUP] Deleting project workspace... 00:00:07.398 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.402 [WS-CLEANUP] done 00:00:07.666 [Pipeline] setCustomBuildProperty 00:00:07.771 [Pipeline] httpRequest 00:00:07.798 [Pipeline] echo 00:00:07.800 Sorcerer 10.211.164.101 is alive 00:00:07.807 [Pipeline] httpRequest 00:00:07.811 HttpMethod: GET 00:00:07.812 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.812 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.824 Response Code: HTTP/1.1 200 OK 00:00:07.824 Success: Status code 200 is in the accepted range: 200,404 00:00:07.825 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.040 [Pipeline] sh 00:00:11.323 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.340 [Pipeline] httpRequest 00:00:11.368 [Pipeline] echo 00:00:11.369 Sorcerer 10.211.164.101 is alive 00:00:11.376 [Pipeline] httpRequest 00:00:11.380 HttpMethod: GET 00:00:11.380 URL: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz 00:00:11.381 Sending request to url: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz 00:00:11.402 Response Code: HTTP/1.1 200 OK 00:00:11.402 Success: Status code 200 is in the accepted range: 200,404 00:00:11.402 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz 00:01:00.599 [Pipeline] sh 00:01:00.877 + tar --no-same-owner -xf spdk_6c0846996bb393be04189626d69239816f169775.tar.gz 00:01:04.203 [Pipeline] sh 00:01:04.479 + git -C spdk log --oneline -n5 00:01:04.480 6c0846996 module/bdev/nvme: add detach-monitor poller 00:01:04.480 70e80ba15 lib/nvme: add scan attached 00:01:04.480 455fda465 nvme_pci: ctrlr_scan_attached callback 00:01:04.480 a732bf2a5 nvme_transport: optional callback to scan attached 00:01:04.480 2728651ee accel: adjust task per ch define name 00:01:04.498 [Pipeline] writeFile 00:01:04.513 [Pipeline] sh 00:01:04.817 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.830 [Pipeline] sh 00:01:05.106 + cat autorun-spdk.conf 00:01:05.106 SPDK_TEST_UNITTEST=1 00:01:05.106 SPDK_RUN_VALGRIND=0 00:01:05.106 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.106 SPDK_TEST_NVME=1 00:01:05.106 SPDK_TEST_BLOCKDEV=1 00:01:05.106 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.112 RUN_NIGHTLY=0 00:01:05.115 [Pipeline] } 00:01:05.134 [Pipeline] // stage 00:01:05.152 [Pipeline] stage 00:01:05.154 [Pipeline] { (Run VM) 00:01:05.172 [Pipeline] sh 00:01:05.452 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:05.452 + echo 'Start stage prepare_nvme.sh' 00:01:05.452 Start stage prepare_nvme.sh 00:01:05.452 + [[ -n 7 ]] 00:01:05.452 + disk_prefix=ex7 00:01:05.452 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:01:05.452 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:01:05.452 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:01:05.452 ++ SPDK_TEST_UNITTEST=1 00:01:05.452 ++ SPDK_RUN_VALGRIND=0 00:01:05.452 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.452 ++ SPDK_TEST_NVME=1 00:01:05.452 ++ SPDK_TEST_BLOCKDEV=1 00:01:05.452 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.452 ++ RUN_NIGHTLY=0 00:01:05.452 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:05.452 + nvme_files=() 00:01:05.452 + declare -A nvme_files 00:01:05.452 + backend_dir=/var/lib/libvirt/images/backends 00:01:05.452 + 
nvme_files['nvme.img']=5G 00:01:05.452 + nvme_files['nvme-cmb.img']=5G 00:01:05.452 + nvme_files['nvme-multi0.img']=4G 00:01:05.452 + nvme_files['nvme-multi1.img']=4G 00:01:05.452 + nvme_files['nvme-multi2.img']=4G 00:01:05.452 + nvme_files['nvme-openstack.img']=8G 00:01:05.452 + nvme_files['nvme-zns.img']=5G 00:01:05.452 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:05.452 + (( SPDK_TEST_FTL == 1 )) 00:01:05.452 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:05.452 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:05.452 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.452 + for nvme in "${!nvme_files[@]}" 00:01:05.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:05.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.709 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:05.709 + echo 'End stage prepare_nvme.sh' 00:01:05.709 End stage prepare_nvme.sh 00:01:05.719 [Pipeline] sh 00:01:05.997 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.997 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f freebsd14 00:01:05.997 00:01:05.997 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:01:05.997 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:01:05.997 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:01:05.997 HELP=0 00:01:05.997 DRY_RUN=0 00:01:05.997 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img, 00:01:05.997 NVME_DISKS_TYPE=nvme, 00:01:05.997 NVME_AUTO_CREATE=0 00:01:05.997 NVME_DISKS_NAMESPACES=, 00:01:05.997 
NVME_CMB=, 00:01:05.997 NVME_PMR=, 00:01:05.997 NVME_ZNS=, 00:01:05.997 NVME_MS=, 00:01:05.997 NVME_FDP=, 00:01:05.997 SPDK_VAGRANT_DISTRO=freebsd14 00:01:05.997 SPDK_VAGRANT_VMCPU=10 00:01:05.997 SPDK_VAGRANT_VMRAM=14336 00:01:05.997 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.997 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.997 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.997 SPDK_OPENSTACK_NETWORK=0 00:01:05.997 VAGRANT_PACKAGE_BOX=0 00:01:05.997 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:05.997 FORCE_DISTRO=true 00:01:05.997 VAGRANT_BOX_VERSION= 00:01:05.997 EXTRA_VAGRANTFILES= 00:01:05.997 NIC_MODEL=e1000 00:01:05.997 00:01:05.997 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt' 00:01:05.997 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:10.181 Bringing machine 'default' up with 'libvirt' provider... 00:01:10.438 ==> default: Creating image (snapshot of base box volume). 00:01:10.438 ==> default: Creating domain with the following settings... 00:01:10.438 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721067302_9d23a866e9dec0d5e6c2 00:01:10.438 ==> default: -- Domain type: kvm 00:01:10.438 ==> default: -- Cpus: 10 00:01:10.438 ==> default: -- Feature: acpi 00:01:10.438 ==> default: -- Feature: apic 00:01:10.438 ==> default: -- Feature: pae 00:01:10.438 ==> default: -- Memory: 14336M 00:01:10.438 ==> default: -- Memory Backing: hugepages: 00:01:10.438 ==> default: -- Management MAC: 00:01:10.438 ==> default: -- Loader: 00:01:10.438 ==> default: -- Nvram: 00:01:10.438 ==> default: -- Base box: spdk/freebsd14 00:01:10.438 ==> default: -- Storage pool: default 00:01:10.438 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721067302_9d23a866e9dec0d5e6c2.img (32G) 00:01:10.438 ==> default: -- Volume Cache: default 00:01:10.438 ==> default: -- Kernel: 00:01:10.438 ==> default: -- Initrd: 00:01:10.438 ==> default: -- Graphics Type: vnc 00:01:10.438 ==> default: -- Graphics Port: -1 00:01:10.438 ==> default: -- Graphics IP: 127.0.0.1 00:01:10.438 ==> default: -- Graphics Password: Not defined 00:01:10.438 ==> default: -- Video Type: cirrus 00:01:10.438 ==> default: -- Video VRAM: 9216 00:01:10.438 ==> default: -- Sound Type: 00:01:10.438 ==> default: -- Keymap: en-us 00:01:10.438 ==> default: -- TPM Path: 00:01:10.438 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:10.438 ==> default: -- Command line args: 00:01:10.438 ==> default: -> value=-device, 00:01:10.438 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:10.438 ==> default: -> value=-drive, 00:01:10.438 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:10.438 ==> default: -> value=-device, 00:01:10.438 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.696 ==> default: Creating shared folders metadata... 00:01:10.696 ==> default: Starting domain. 00:01:12.596 ==> default: Waiting for domain to get an IP address... 00:01:34.669 ==> default: Waiting for SSH to become available... 00:01:44.643 ==> default: Configuring and enabling network interfaces... 
00:01:51.198 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:03.403 ==> default: Mounting SSHFS shared folder... 00:02:04.339 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:02:04.339 ==> default: Checking Mount.. 00:02:05.738 ==> default: Folder Successfully Mounted! 00:02:05.738 ==> default: Running provisioner: file... 00:02:06.674 default: ~/.gitconfig => .gitconfig 00:02:07.257 00:02:07.257 SUCCESS! 00:02:07.257 00:02:07.257 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt and type "vagrant ssh" to use. 00:02:07.257 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:07.257 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt" to destroy all trace of vm. 00:02:07.257 00:02:07.266 [Pipeline] } 00:02:07.284 [Pipeline] // stage 00:02:07.294 [Pipeline] dir 00:02:07.294 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt 00:02:07.296 [Pipeline] { 00:02:07.312 [Pipeline] catchError 00:02:07.314 [Pipeline] { 00:02:07.330 [Pipeline] sh 00:02:07.610 + vagrant ssh-config --host vagrant 00:02:07.610 + sed -ne /^Host/,$p 00:02:07.610 + tee ssh_conf 00:02:11.798 Host vagrant 00:02:11.798 HostName 192.168.121.180 00:02:11.798 User vagrant 00:02:11.798 Port 22 00:02:11.798 UserKnownHostsFile /dev/null 00:02:11.798 StrictHostKeyChecking no 00:02:11.798 PasswordAuthentication no 00:02:11.798 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:02:11.798 IdentitiesOnly yes 00:02:11.798 LogLevel FATAL 00:02:11.798 ForwardAgent yes 00:02:11.798 ForwardX11 yes 00:02:11.798 00:02:11.816 [Pipeline] withEnv 00:02:11.819 [Pipeline] { 00:02:11.837 [Pipeline] sh 00:02:12.122 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:12.122 source /etc/os-release 00:02:12.122 [[ -e /image.version ]] && img=$(< /image.version) 00:02:12.122 # Minimal, systemd-like check. 00:02:12.122 if [[ -e /.dockerenv ]]; then 00:02:12.122 # Clear garbage from the node's name: 00:02:12.122 # agt-er_autotest_547-896 -> autotest_547-896 00:02:12.122 # $HOSTNAME is the actual container id 00:02:12.122 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:12.122 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:12.122 # We can assume this is a mount from a host where container is running, 00:02:12.122 # so fetch its hostname to easily identify the target swarm worker. 
00:02:12.122 container="$(< /etc/hostname) ($agent)" 00:02:12.122 else 00:02:12.122 # Fallback 00:02:12.122 container=$agent 00:02:12.122 fi 00:02:12.122 fi 00:02:12.122 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:12.122 00:02:12.134 [Pipeline] } 00:02:12.157 [Pipeline] // withEnv 00:02:12.167 [Pipeline] setCustomBuildProperty 00:02:12.184 [Pipeline] stage 00:02:12.186 [Pipeline] { (Tests) 00:02:12.203 [Pipeline] sh 00:02:12.485 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:12.755 [Pipeline] sh 00:02:13.030 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:13.045 [Pipeline] timeout 00:02:13.046 Timeout set to expire in 1 hr 30 min 00:02:13.047 [Pipeline] { 00:02:13.063 [Pipeline] sh 00:02:13.343 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:13.911 HEAD is now at 6c0846996 module/bdev/nvme: add detach-monitor poller 00:02:13.923 [Pipeline] sh 00:02:14.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:14.214 [Pipeline] sh 00:02:14.490 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.511 [Pipeline] sh 00:02:14.791 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:14.791 ++ readlink -f spdk_repo 00:02:14.791 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:14.791 + [[ -n /home/vagrant/spdk_repo ]] 00:02:14.791 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:14.791 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:14.791 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:14.791 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:14.791 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.791 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:14.791 + cd /home/vagrant/spdk_repo 00:02:14.791 + source /etc/os-release 00:02:14.791 ++ NAME=FreeBSD 00:02:14.791 ++ VERSION=14.0-RELEASE 00:02:14.791 ++ VERSION_ID=14.0 00:02:14.791 ++ ID=freebsd 00:02:14.791 ++ ANSI_COLOR='0;31' 00:02:14.791 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:02:14.791 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:02:14.791 ++ HOME_URL=https://FreeBSD.org/ 00:02:14.791 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:14.791 + uname -a 00:02:14.791 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:02:14.791 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:15.050 Contigmem (not present) 00:02:15.050 Buffer Size: not set 00:02:15.050 Num Buffers: not set 00:02:15.050 00:02:15.050 00:02:15.050 Type BDF Vendor Device Driver 00:02:15.050 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:02:15.050 + rm -f /tmp/spdk-ld-path 00:02:15.050 + source autorun-spdk.conf 00:02:15.050 ++ SPDK_TEST_UNITTEST=1 00:02:15.050 ++ SPDK_RUN_VALGRIND=0 00:02:15.050 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.050 ++ SPDK_TEST_NVME=1 00:02:15.050 ++ SPDK_TEST_BLOCKDEV=1 00:02:15.050 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.050 ++ RUN_NIGHTLY=0 00:02:15.050 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.050 + [[ -n '' ]] 00:02:15.050 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:15.050 + for M in /var/spdk/build-*-manifest.txt 00:02:15.050 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.050 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.050 + for M in /var/spdk/build-*-manifest.txt 00:02:15.050 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.050 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.050 ++ uname 00:02:15.050 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:15.050 + dmesg_pid=1231 00:02:15.050 + tail -F /var/log/messages 00:02:15.050 + [[ FreeBSD == FreeBSD ]] 00:02:15.050 + export LC_ALL=C LC_CTYPE=C 00:02:15.050 + LC_ALL=C 00:02:15.050 + LC_CTYPE=C 00:02:15.050 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.050 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.050 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.050 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.050 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.050 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:15.050 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.050 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:15.050 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.050 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.050 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.050 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.050 Test configuration: 00:02:15.050 SPDK_TEST_UNITTEST=1 00:02:15.050 SPDK_RUN_VALGRIND=0 00:02:15.050 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.050 SPDK_TEST_NVME=1 00:02:15.050 SPDK_TEST_BLOCKDEV=1 00:02:15.050 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.309 RUN_NIGHTLY=0 18:16:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:15.309 18:16:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.309 18:16:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.309 18:16:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.309 18:16:07 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.309 18:16:07 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.309 18:16:07 -- paths/export.sh@4 -- $ export PATH 00:02:15.309 18:16:07 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.309 18:16:07 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:15.309 18:16:07 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:15.309 18:16:07 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067367.XXXXXX 00:02:15.309 18:16:07 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721067367.XXXXXX.bLFJyICrvg 00:02:15.309 18:16:07 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:15.309 18:16:07 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:15.309 18:16:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:15.309 18:16:07 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:15.309 18:16:07 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.309 18:16:07 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:15.309 18:16:07 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:15.309 18:16:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.309 18:16:07 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:15.309 18:16:07 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:15.309 18:16:07 -- pm/common@17 -- $ local monitor 00:02:15.309 18:16:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.309 18:16:07 -- pm/common@25 -- $ sleep 1 00:02:15.309 18:16:07 -- 
pm/common@21 -- $ date +%s 00:02:15.309 18:16:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721067367 00:02:15.309 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721067367_collect-vmstat.pm.log 00:02:16.246 18:16:08 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:16.246 18:16:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.246 18:16:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.246 18:16:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.246 18:16:08 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.246 Mon Jul 15 18:16:08 UTC 2024 00:02:16.246 18:16:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.246 v24.09-pre-210-g6c0846996 00:02:16.246 18:16:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.246 18:16:08 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:16.246 18:16:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:16.246 18:16:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:16.246 18:16:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:16.246 18:16:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:16.246 18:16:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:16.246 18:16:08 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:16.246 18:16:08 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:16.246 18:16:08 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:02:16.246 18:16:08 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:16.246 18:16:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:16.246 18:16:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.246 ************************************ 00:02:16.246 START TEST unittest_build 00:02:16.246 ************************************ 00:02:16.246 18:16:08 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:02:16.246 18:16:08 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:17.196 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:17.196 are only supported on Linux. Turning off default feature. 00:02:17.196 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:17.196 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:17.761 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:17.761 Using 'verbs' RDMA provider 00:02:28.301 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:38.334 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:38.334 Creating mk/config.mk...done. 00:02:38.334 Creating mk/cc.flags.mk...done. 00:02:38.334 Type 'gmake' to build. 00:02:38.334 18:16:29 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10 00:02:38.334 gmake[1]: Nothing to be done for 'all'. 
00:02:41.691 ps: stdin: not a terminal 00:02:45.879 The Meson build system 00:02:45.879 Version: 1.4.0 00:02:45.879 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:45.879 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:45.879 Build type: native build 00:02:45.879 Program cat found: YES (/bin/cat) 00:02:45.879 Project name: DPDK 00:02:45.879 Project version: 24.03.0 00:02:45.879 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:02:45.879 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:02:45.879 Host machine cpu family: x86_64 00:02:45.879 Host machine cpu: x86_64 00:02:45.879 Message: ## Building in Developer Mode ## 00:02:45.879 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:45.879 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:45.879 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:45.879 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:45.879 Program cat found: YES (/bin/cat) 00:02:45.879 Compiler for C supports arguments -march=native: YES 00:02:45.879 Checking for size of "void *" : 8 00:02:45.879 Checking for size of "void *" : 8 (cached) 00:02:45.879 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:45.879 Library m found: YES 00:02:45.879 Library numa found: NO 00:02:45.879 Library fdt found: NO 00:02:45.879 Library execinfo found: YES 00:02:45.879 Has header "execinfo.h" : YES 00:02:45.879 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:02:45.879 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.879 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.879 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:45.879 Run-time dependency openssl found: YES 3.0.13 00:02:45.879 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:45.879 Library pcap found: YES 00:02:45.879 Has header "pcap.h" with dependency -lpcap: YES 00:02:45.879 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.879 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.879 Compiler for C supports arguments -Wformat: YES 00:02:45.879 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:45.879 Compiler for C supports arguments -Wformat-security: YES 00:02:45.879 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.879 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.879 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.879 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.879 Compiler for C supports arguments -Wpointer-arith: YES 00:02:45.879 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.879 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.879 Compiler for C supports arguments -Wundef: YES 00:02:45.879 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.879 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.879 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:45.879 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.879 Compiler for C supports arguments -mavx512f: YES 00:02:45.879 Checking if "AVX512 checking" compiles: YES 00:02:45.879 Fetching value of define "__SSE4_2__" : 1 00:02:45.879 Fetching value of 
define "__AES__" : 1 00:02:45.879 Fetching value of define "__AVX__" : 1 00:02:45.879 Fetching value of define "__AVX2__" : 1 00:02:45.879 Fetching value of define "__AVX512BW__" : (undefined) 00:02:45.879 Fetching value of define "__AVX512CD__" : (undefined) 00:02:45.879 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:45.879 Fetching value of define "__AVX512F__" : (undefined) 00:02:45.879 Fetching value of define "__AVX512VL__" : (undefined) 00:02:45.879 Fetching value of define "__PCLMUL__" : 1 00:02:45.879 Fetching value of define "__RDRND__" : 1 00:02:45.879 Fetching value of define "__RDSEED__" : 1 00:02:45.879 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.879 Fetching value of define "__znver1__" : (undefined) 00:02:45.879 Fetching value of define "__znver2__" : (undefined) 00:02:45.879 Fetching value of define "__znver3__" : (undefined) 00:02:45.879 Fetching value of define "__znver4__" : (undefined) 00:02:45.879 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:45.879 Message: lib/log: Defining dependency "log" 00:02:45.879 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.879 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.879 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:45.879 Checking for function "getentropy" : YES 00:02:45.879 Message: lib/eal: Defining dependency "eal" 00:02:45.879 Message: lib/ring: Defining dependency "ring" 00:02:45.879 Message: lib/rcu: Defining dependency "rcu" 00:02:45.879 Message: lib/mempool: Defining dependency "mempool" 00:02:45.879 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.880 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.880 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:45.880 Compiler for C supports arguments -mpclmul: YES 00:02:45.880 Compiler for C supports arguments -maes: YES 00:02:45.880 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.880 Compiler for C supports arguments -mavx512bw: YES 00:02:45.880 Compiler for C supports arguments -mavx512dq: YES 00:02:45.880 Compiler for C supports arguments -mavx512vl: YES 00:02:45.880 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.880 Compiler for C supports arguments -mavx2: YES 00:02:45.880 Compiler for C supports arguments -mavx: YES 00:02:45.880 Message: lib/net: Defining dependency "net" 00:02:45.880 Message: lib/meter: Defining dependency "meter" 00:02:45.880 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.880 Message: lib/pci: Defining dependency "pci" 00:02:45.880 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.880 Message: lib/hash: Defining dependency "hash" 00:02:45.880 Message: lib/timer: Defining dependency "timer" 00:02:45.880 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.880 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.880 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.880 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.880 Message: lib/reorder: Defining dependency "reorder" 00:02:45.880 Message: lib/security: Defining dependency "security" 00:02:45.880 Has header "linux/userfaultfd.h" : NO 00:02:45.880 Has header "linux/vduse.h" : NO 00:02:45.880 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:45.880 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.880 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.880 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:02:45.880 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.880 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.880 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.880 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:45.880 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.880 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.880 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.880 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:45.880 Configuring doxy-api-html.conf using configuration 00:02:45.880 Configuring doxy-api-man.conf using configuration 00:02:45.880 Program mandb found: NO 00:02:45.880 Program sphinx-build found: NO 00:02:45.880 Configuring rte_build_config.h using configuration 00:02:45.880 Message: 00:02:45.880 ================= 00:02:45.880 Applications Enabled 00:02:45.880 ================= 00:02:45.880 00:02:45.880 apps: 00:02:45.880 00:02:45.880 00:02:45.880 Message: 00:02:45.880 ================= 00:02:45.880 Libraries Enabled 00:02:45.880 ================= 00:02:45.880 00:02:45.880 libs: 00:02:45.880 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.880 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.880 cryptodev, dmadev, reorder, security, 00:02:45.880 00:02:45.880 Message: 00:02:45.880 =============== 00:02:45.880 Drivers Enabled 00:02:45.880 =============== 00:02:45.880 00:02:45.880 common: 00:02:45.880 00:02:45.880 bus: 00:02:45.880 pci, vdev, 00:02:45.880 mempool: 00:02:45.880 ring, 00:02:45.880 dma: 00:02:45.880 00:02:45.880 net: 00:02:45.880 00:02:45.880 crypto: 00:02:45.880 00:02:45.880 compress: 00:02:45.880 00:02:45.880 00:02:45.880 Message: 00:02:45.880 ================= 00:02:45.880 Content Skipped 00:02:45.880 ================= 00:02:45.880 00:02:45.880 apps: 00:02:45.880 dumpcap: explicitly disabled via build config 00:02:45.880 graph: explicitly disabled via build config 00:02:45.880 pdump: explicitly disabled via build config 00:02:45.880 proc-info: explicitly disabled via build config 00:02:45.880 test-acl: explicitly disabled via build config 00:02:45.880 test-bbdev: explicitly disabled via build config 00:02:45.880 test-cmdline: explicitly disabled via build config 00:02:45.880 test-compress-perf: explicitly disabled via build config 00:02:45.880 test-crypto-perf: explicitly disabled via build config 00:02:45.880 test-dma-perf: explicitly disabled via build config 00:02:45.880 test-eventdev: explicitly disabled via build config 00:02:45.880 test-fib: explicitly disabled via build config 00:02:45.880 test-flow-perf: explicitly disabled via build config 00:02:45.880 test-gpudev: explicitly disabled via build config 00:02:45.880 test-mldev: explicitly disabled via build config 00:02:45.880 test-pipeline: explicitly disabled via build config 00:02:45.880 test-pmd: explicitly disabled via build config 00:02:45.880 test-regex: explicitly disabled via build config 00:02:45.880 test-sad: explicitly disabled via build config 00:02:45.880 test-security-perf: explicitly disabled via build config 00:02:45.880 00:02:45.880 libs: 00:02:45.880 argparse: explicitly disabled via build config 00:02:45.880 metrics: explicitly disabled via build config 00:02:45.880 acl: explicitly disabled via build config 00:02:45.880 bbdev: explicitly disabled via build config 00:02:45.880 bitratestats: 
explicitly disabled via build config 00:02:45.880 bpf: explicitly disabled via build config 00:02:45.880 cfgfile: explicitly disabled via build config 00:02:45.880 distributor: explicitly disabled via build config 00:02:45.880 efd: explicitly disabled via build config 00:02:45.880 eventdev: explicitly disabled via build config 00:02:45.880 dispatcher: explicitly disabled via build config 00:02:45.880 gpudev: explicitly disabled via build config 00:02:45.880 gro: explicitly disabled via build config 00:02:45.880 gso: explicitly disabled via build config 00:02:45.880 ip_frag: explicitly disabled via build config 00:02:45.880 jobstats: explicitly disabled via build config 00:02:45.880 latencystats: explicitly disabled via build config 00:02:45.880 lpm: explicitly disabled via build config 00:02:45.880 member: explicitly disabled via build config 00:02:45.880 pcapng: explicitly disabled via build config 00:02:45.880 power: only supported on Linux 00:02:45.880 rawdev: explicitly disabled via build config 00:02:45.880 regexdev: explicitly disabled via build config 00:02:45.880 mldev: explicitly disabled via build config 00:02:45.880 rib: explicitly disabled via build config 00:02:45.880 sched: explicitly disabled via build config 00:02:45.880 stack: explicitly disabled via build config 00:02:45.880 vhost: only supported on Linux 00:02:45.880 ipsec: explicitly disabled via build config 00:02:45.880 pdcp: explicitly disabled via build config 00:02:45.880 fib: explicitly disabled via build config 00:02:45.880 port: explicitly disabled via build config 00:02:45.880 pdump: explicitly disabled via build config 00:02:45.880 table: explicitly disabled via build config 00:02:45.880 pipeline: explicitly disabled via build config 00:02:45.880 graph: explicitly disabled via build config 00:02:45.880 node: explicitly disabled via build config 00:02:45.880 00:02:45.880 drivers: 00:02:45.880 common/cpt: not in enabled drivers build config 00:02:45.880 common/dpaax: not in enabled drivers build config 00:02:45.880 common/iavf: not in enabled drivers build config 00:02:45.880 common/idpf: not in enabled drivers build config 00:02:45.880 common/ionic: not in enabled drivers build config 00:02:45.880 common/mvep: not in enabled drivers build config 00:02:45.880 common/octeontx: not in enabled drivers build config 00:02:45.880 bus/auxiliary: not in enabled drivers build config 00:02:45.880 bus/cdx: not in enabled drivers build config 00:02:45.880 bus/dpaa: not in enabled drivers build config 00:02:45.880 bus/fslmc: not in enabled drivers build config 00:02:45.880 bus/ifpga: not in enabled drivers build config 00:02:45.880 bus/platform: not in enabled drivers build config 00:02:45.880 bus/uacce: not in enabled drivers build config 00:02:45.880 bus/vmbus: not in enabled drivers build config 00:02:45.880 common/cnxk: not in enabled drivers build config 00:02:45.880 common/mlx5: not in enabled drivers build config 00:02:45.880 common/nfp: not in enabled drivers build config 00:02:45.880 common/nitrox: not in enabled drivers build config 00:02:45.880 common/qat: not in enabled drivers build config 00:02:45.880 common/sfc_efx: not in enabled drivers build config 00:02:45.880 mempool/bucket: not in enabled drivers build config 00:02:45.880 mempool/cnxk: not in enabled drivers build config 00:02:45.880 mempool/dpaa: not in enabled drivers build config 00:02:45.880 mempool/dpaa2: not in enabled drivers build config 00:02:45.880 mempool/octeontx: not in enabled drivers build config 00:02:45.880 mempool/stack: not in enabled 
drivers build config 00:02:45.880 dma/cnxk: not in enabled drivers build config 00:02:45.880 dma/dpaa: not in enabled drivers build config 00:02:45.880 dma/dpaa2: not in enabled drivers build config 00:02:45.880 dma/hisilicon: not in enabled drivers build config 00:02:45.880 dma/idxd: not in enabled drivers build config 00:02:45.880 dma/ioat: not in enabled drivers build config 00:02:45.880 dma/skeleton: not in enabled drivers build config 00:02:45.880 net/af_packet: not in enabled drivers build config 00:02:45.880 net/af_xdp: not in enabled drivers build config 00:02:45.880 net/ark: not in enabled drivers build config 00:02:45.880 net/atlantic: not in enabled drivers build config 00:02:45.880 net/avp: not in enabled drivers build config 00:02:45.880 net/axgbe: not in enabled drivers build config 00:02:45.880 net/bnx2x: not in enabled drivers build config 00:02:45.880 net/bnxt: not in enabled drivers build config 00:02:45.880 net/bonding: not in enabled drivers build config 00:02:45.880 net/cnxk: not in enabled drivers build config 00:02:45.880 net/cpfl: not in enabled drivers build config 00:02:45.880 net/cxgbe: not in enabled drivers build config 00:02:45.880 net/dpaa: not in enabled drivers build config 00:02:45.880 net/dpaa2: not in enabled drivers build config 00:02:45.880 net/e1000: not in enabled drivers build config 00:02:45.880 net/ena: not in enabled drivers build config 00:02:45.880 net/enetc: not in enabled drivers build config 00:02:45.880 net/enetfec: not in enabled drivers build config 00:02:45.880 net/enic: not in enabled drivers build config 00:02:45.880 net/failsafe: not in enabled drivers build config 00:02:45.880 net/fm10k: not in enabled drivers build config 00:02:45.880 net/gve: not in enabled drivers build config 00:02:45.880 net/hinic: not in enabled drivers build config 00:02:45.880 net/hns3: not in enabled drivers build config 00:02:45.880 net/i40e: not in enabled drivers build config 00:02:45.880 net/iavf: not in enabled drivers build config 00:02:45.880 net/ice: not in enabled drivers build config 00:02:45.880 net/idpf: not in enabled drivers build config 00:02:45.880 net/igc: not in enabled drivers build config 00:02:45.880 net/ionic: not in enabled drivers build config 00:02:45.881 net/ipn3ke: not in enabled drivers build config 00:02:45.881 net/ixgbe: not in enabled drivers build config 00:02:45.881 net/mana: not in enabled drivers build config 00:02:45.881 net/memif: not in enabled drivers build config 00:02:45.881 net/mlx4: not in enabled drivers build config 00:02:45.881 net/mlx5: not in enabled drivers build config 00:02:45.881 net/mvneta: not in enabled drivers build config 00:02:45.881 net/mvpp2: not in enabled drivers build config 00:02:45.881 net/netvsc: not in enabled drivers build config 00:02:45.881 net/nfb: not in enabled drivers build config 00:02:45.881 net/nfp: not in enabled drivers build config 00:02:45.881 net/ngbe: not in enabled drivers build config 00:02:45.881 net/null: not in enabled drivers build config 00:02:45.881 net/octeontx: not in enabled drivers build config 00:02:45.881 net/octeon_ep: not in enabled drivers build config 00:02:45.881 net/pcap: not in enabled drivers build config 00:02:45.881 net/pfe: not in enabled drivers build config 00:02:45.881 net/qede: not in enabled drivers build config 00:02:45.881 net/ring: not in enabled drivers build config 00:02:45.881 net/sfc: not in enabled drivers build config 00:02:45.881 net/softnic: not in enabled drivers build config 00:02:45.881 net/tap: not in enabled drivers build config 
00:02:45.881 net/thunderx: not in enabled drivers build config 00:02:45.881 net/txgbe: not in enabled drivers build config 00:02:45.881 net/vdev_netvsc: not in enabled drivers build config 00:02:45.881 net/vhost: not in enabled drivers build config 00:02:45.881 net/virtio: not in enabled drivers build config 00:02:45.881 net/vmxnet3: not in enabled drivers build config 00:02:45.881 raw/*: missing internal dependency, "rawdev" 00:02:45.881 crypto/armv8: not in enabled drivers build config 00:02:45.881 crypto/bcmfs: not in enabled drivers build config 00:02:45.881 crypto/caam_jr: not in enabled drivers build config 00:02:45.881 crypto/ccp: not in enabled drivers build config 00:02:45.881 crypto/cnxk: not in enabled drivers build config 00:02:45.881 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.881 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.881 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.881 crypto/mlx5: not in enabled drivers build config 00:02:45.881 crypto/mvsam: not in enabled drivers build config 00:02:45.881 crypto/nitrox: not in enabled drivers build config 00:02:45.881 crypto/null: not in enabled drivers build config 00:02:45.881 crypto/octeontx: not in enabled drivers build config 00:02:45.881 crypto/openssl: not in enabled drivers build config 00:02:45.881 crypto/scheduler: not in enabled drivers build config 00:02:45.881 crypto/uadk: not in enabled drivers build config 00:02:45.881 crypto/virtio: not in enabled drivers build config 00:02:45.881 compress/isal: not in enabled drivers build config 00:02:45.881 compress/mlx5: not in enabled drivers build config 00:02:45.881 compress/nitrox: not in enabled drivers build config 00:02:45.881 compress/octeontx: not in enabled drivers build config 00:02:45.881 compress/zlib: not in enabled drivers build config 00:02:45.881 regex/*: missing internal dependency, "regexdev" 00:02:45.881 ml/*: missing internal dependency, "mldev" 00:02:45.881 vdpa/*: missing internal dependency, "vhost" 00:02:45.881 event/*: missing internal dependency, "eventdev" 00:02:45.881 baseband/*: missing internal dependency, "bbdev" 00:02:45.881 gpu/*: missing internal dependency, "gpudev" 00:02:45.881 00:02:45.881 00:02:45.881 Build targets in project: 81 00:02:45.881 00:02:45.881 DPDK 24.03.0 00:02:45.881 00:02:45.881 User defined options 00:02:45.881 buildtype : debug 00:02:45.881 default_library : static 00:02:45.881 libdir : lib 00:02:45.881 prefix : / 00:02:45.881 c_args : -fPIC -Werror 00:02:45.881 c_link_args : 00:02:45.881 cpu_instruction_set: native 00:02:45.881 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.881 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.881 enable_docs : false 00:02:45.881 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.881 enable_kmods : true 00:02:45.881 max_lcores : 128 00:02:45.881 tests : false 00:02:45.881 00:02:45.881 Found ninja-1.11.1 at /usr/local/bin/ninja 00:02:46.448 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.448 [1/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.448 
[2/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:46.706 [3/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.706 [4/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.706 [5/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.706 [6/233] Linking static target lib/librte_log.a 00:02:46.706 [7/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.706 [8/233] Linking static target lib/librte_kvargs.a 00:02:46.964 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.964 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.964 [11/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.964 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.964 [13/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.964 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.964 [15/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.964 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.964 [17/233] Linking static target lib/librte_telemetry.a 00:02:47.222 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.222 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:47.481 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:47.481 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:47.481 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:47.481 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:47.481 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:47.481 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.481 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.481 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.739 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.739 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.739 [30/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.739 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.739 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.739 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.739 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.739 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.998 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.999 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.999 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.999 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.999 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.999 [41/233] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.999 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.258 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.258 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.258 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.258 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.258 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.258 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.258 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.549 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.549 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:48.549 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.549 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.549 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.814 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.815 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.815 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.815 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.815 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:48.815 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:48.815 [61/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:48.815 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.815 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.815 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.815 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:49.073 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:49.073 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:49.073 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:49.332 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:49.332 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:49.332 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:49.332 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.332 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.332 [74/233] Linking static target lib/librte_eal.a 00:02:49.332 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.332 [76/233] Linking static target lib/librte_ring.a 00:02:49.590 [77/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.590 [78/233] Linking static target lib/librte_rcu.a 00:02:49.590 [79/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.590 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.590 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.590 [82/233] Generating lib/log.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:49.874 [83/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.874 [84/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.874 [85/233] Linking target lib/librte_log.so.24.1 00:02:49.874 [86/233] Linking static target lib/librte_mempool.a 00:02:49.874 [87/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.874 [88/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.874 [89/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.874 [90/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.874 [91/233] Linking target lib/librte_kvargs.so.24.1 00:02:49.874 [92/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.874 [93/233] Linking target lib/librte_telemetry.so.24.1 00:02:50.133 [94/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.133 [95/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.133 [96/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.133 [97/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:50.133 [98/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.133 [99/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.133 [100/233] Linking static target lib/librte_mbuf.a 00:02:50.133 [101/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.133 [102/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.392 [103/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.392 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.392 [105/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.392 [106/233] Linking static target lib/librte_net.a 00:02:50.392 [107/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.392 [108/233] Linking static target lib/librte_meter.a 00:02:50.651 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.651 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.651 [111/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.651 [112/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.651 [113/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.909 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.167 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.167 [116/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.167 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.167 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.425 [119/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.425 [120/233] Linking static target lib/librte_pci.a 00:02:51.425 [121/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.425 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.425 [123/233] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.425 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.425 [125/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.425 [126/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.425 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.425 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.425 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.425 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.683 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.683 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.683 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.683 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.683 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.683 [136/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.683 [137/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.683 [138/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.683 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.683 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.941 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.941 [142/233] Linking static target lib/librte_ethdev.a 00:02:51.941 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.941 [144/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.941 [145/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.941 [146/233] Linking static target lib/librte_cmdline.a 00:02:52.200 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:52.200 [148/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.200 [149/233] Linking static target lib/librte_timer.a 00:02:52.200 [150/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.200 [151/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.200 [152/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.458 [153/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.458 [154/233] Linking static target lib/librte_hash.a 00:02:52.458 [155/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.458 [156/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.717 [157/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.717 [158/233] Linking static target lib/librte_compressdev.a 00:02:52.717 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.717 [160/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.717 [161/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.717 [162/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.975 [163/233] 
Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.975 [164/233] Linking static target lib/librte_dmadev.a 00:02:52.975 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.975 [166/233] Linking static target lib/librte_reorder.a 00:02:52.975 [167/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [168/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [169/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.233 [170/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.233 [172/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.233 [173/233] Linking static target lib/librte_cryptodev.a 00:02:53.233 [174/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [175/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.233 [176/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.233 [177/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.233 [178/233] Linking static target lib/librte_security.a 00:02:53.491 [179/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:53.491 [180/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.491 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.748 [182/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.748 [183/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.748 [184/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.748 [185/233] Linking static target drivers/librte_bus_pci.a 00:02:53.748 [186/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.748 [187/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.748 [188/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.748 [189/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.748 [190/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.748 [191/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.748 [192/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.006 [193/233] Linking static target drivers/librte_bus_vdev.a 00:02:54.006 [194/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.006 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.006 [196/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.006 [197/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.006 [198/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.006 [199/233] Linking static target drivers/librte_mempool_ring.a 00:02:54.006 [200/233] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.572 [201/233] Generating kernel/freebsd/contigmem with a custom command
00:02:54.572 machine -> /usr/src/sys/amd64/include
00:02:54.572 x86 -> /usr/src/sys/x86/include
00:02:54.572 i386 -> /usr/src/sys/i386/include
00:02:54.572 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h
00:02:54.572 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h
00:02:54.572 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h
00:02:54.572 touch opt_global.h
00:02:54.572 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o
00:02:54.572 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o
00:02:54.572 :> export_syms
00:02:54.572 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko
00:02:54.572 objcopy --strip-debug contigmem.ko
00:02:54.831 [202/233] Generating kernel/freebsd/nic_uio with a custom command
00:02:54.831 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o
00:02:54.831 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o
00:02:54.831 :> export_syms
00:02:54.831 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko
00:02:54.831 objcopy --strip-debug nic_uio.ko
00:02:57.364 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.893 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.893 [205/233] Linking target lib/librte_eal.so.24.1
00:02:59.893 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:59.893 [207/233] Linking target lib/librte_dmadev.so.24.1
00:02:59.893 [208/233] Linking target drivers/librte_bus_vdev.so.24.1
00:02:59.893 [209/233] Linking target lib/librte_timer.so.24.1
00:02:59.893 [210/233] Linking target lib/librte_pci.so.24.1
00:02:59.893 [211/233] Linking target lib/librte_ring.so.24.1
00:02:59.893 [212/233] Linking target lib/librte_meter.so.24.1
00:02:59.893 [213/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:59.893 [214/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:00.151 [215/233] Linking target drivers/librte_bus_pci.so.24.1
00:03:00.151 [216/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:00.151 [217/233] Linking target lib/librte_mempool.so.24.1
00:03:00.151 [218/233] Linking target lib/librte_rcu.so.24.1
00:03:00.151 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:00.151 [220/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:00.151 [221/233] Linking target lib/librte_mbuf.so.24.1
00:03:00.151 [222/233] Linking target drivers/librte_mempool_ring.so.24.1
00:03:00.408 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:00.408 [224/233] Linking target lib/librte_net.so.24.1
00:03:00.408 [225/233] Linking target lib/librte_cryptodev.so.24.1
00:03:00.408 [226/233] Linking target lib/librte_reorder.so.24.1
00:03:00.408 [227/233] Linking target lib/librte_compressdev.so.24.1
00:03:00.408 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:00.408 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:00.408 [230/233] Linking target lib/librte_cmdline.so.24.1
00:03:00.408 [231/233]
Linking target lib/librte_security.so.24.1 00:03:00.408 [232/233] Linking target lib/librte_hash.so.24.1 00:03:00.408 [233/233] Linking target lib/librte_ethdev.so.24.1 00:03:00.408 INFO: autodetecting backend as ninja 00:03:00.408 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:01.348 CC lib/ut_mock/mock.o 00:03:01.348 CC lib/log/log.o 00:03:01.348 CC lib/log/log_flags.o 00:03:01.348 CC lib/log/log_deprecated.o 00:03:01.348 CC lib/ut/ut.o 00:03:01.348 LIB libspdk_ut_mock.a 00:03:01.348 LIB libspdk_log.a 00:03:01.348 LIB libspdk_ut.a 00:03:01.605 CC lib/ioat/ioat.o 00:03:01.605 CXX lib/trace_parser/trace.o 00:03:01.605 CC lib/dma/dma.o 00:03:01.605 CC lib/util/base64.o 00:03:01.605 CC lib/util/bit_array.o 00:03:01.605 CC lib/util/cpuset.o 00:03:01.605 CC lib/util/crc16.o 00:03:01.605 CC lib/util/crc32.o 00:03:01.605 CC lib/util/crc32c.o 00:03:01.605 CC lib/util/crc32_ieee.o 00:03:01.605 CC lib/util/crc64.o 00:03:01.605 CC lib/util/dif.o 00:03:01.605 CC lib/util/fd.o 00:03:01.605 CC lib/util/file.o 00:03:01.605 LIB libspdk_dma.a 00:03:01.605 CC lib/util/hexlify.o 00:03:01.605 LIB libspdk_ioat.a 00:03:01.605 CC lib/util/iov.o 00:03:01.605 CC lib/util/math.o 00:03:01.605 CC lib/util/pipe.o 00:03:01.605 CC lib/util/strerror_tls.o 00:03:01.605 CC lib/util/string.o 00:03:01.605 CC lib/util/uuid.o 00:03:01.605 CC lib/util/fd_group.o 00:03:01.863 CC lib/util/xor.o 00:03:01.863 CC lib/util/zipf.o 00:03:01.863 LIB libspdk_util.a 00:03:01.863 CC lib/vmd/vmd.o 00:03:01.863 CC lib/vmd/led.o 00:03:01.863 CC lib/rdma_utils/rdma_utils.o 00:03:01.863 CC lib/conf/conf.o 00:03:01.863 CC lib/idxd/idxd.o 00:03:01.863 CC lib/idxd/idxd_user.o 00:03:01.863 CC lib/env_dpdk/env.o 00:03:01.863 CC lib/rdma_provider/common.o 00:03:01.863 CC lib/json/json_parse.o 00:03:02.123 CC lib/env_dpdk/memory.o 00:03:02.123 LIB libspdk_conf.a 00:03:02.123 CC lib/env_dpdk/pci.o 00:03:02.123 CC lib/json/json_util.o 00:03:02.123 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.123 CC lib/env_dpdk/init.o 00:03:02.123 LIB libspdk_rdma_utils.a 00:03:02.123 CC lib/env_dpdk/threads.o 00:03:02.123 LIB libspdk_idxd.a 00:03:02.123 LIB libspdk_vmd.a 00:03:02.123 CC lib/json/json_write.o 00:03:02.123 CC lib/env_dpdk/pci_ioat.o 00:03:02.123 CC lib/env_dpdk/pci_virtio.o 00:03:02.123 LIB libspdk_rdma_provider.a 00:03:02.123 CC lib/env_dpdk/pci_vmd.o 00:03:02.123 CC lib/env_dpdk/pci_idxd.o 00:03:02.123 CC lib/env_dpdk/pci_event.o 00:03:02.123 CC lib/env_dpdk/sigbus_handler.o 00:03:02.123 CC lib/env_dpdk/pci_dpdk.o 00:03:02.123 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.123 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.382 LIB libspdk_json.a 00:03:02.382 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.382 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.382 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.382 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.382 LIB libspdk_jsonrpc.a 00:03:02.642 CC lib/rpc/rpc.o 00:03:02.642 LIB libspdk_rpc.a 00:03:02.642 LIB libspdk_env_dpdk.a 00:03:02.642 CC lib/notify/notify.o 00:03:02.642 CC lib/notify/notify_rpc.o 00:03:02.642 CC lib/keyring/keyring.o 00:03:02.642 CC lib/keyring/keyring_rpc.o 00:03:02.642 CC lib/trace/trace.o 00:03:02.642 CC lib/trace/trace_rpc.o 00:03:02.642 CC lib/trace/trace_flags.o 00:03:02.900 LIB libspdk_notify.a 00:03:02.900 LIB libspdk_trace.a 00:03:02.900 LIB libspdk_keyring.a 00:03:02.900 CC lib/sock/sock.o 00:03:02.900 CC lib/sock/sock_rpc.o 00:03:02.900 CC lib/thread/thread.o 00:03:02.900 CC lib/thread/iobuf.o 00:03:03.162 LIB 
libspdk_trace_parser.a 00:03:03.162 LIB libspdk_sock.a 00:03:03.162 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.162 CC lib/nvme/nvme_ctrlr.o 00:03:03.162 CC lib/nvme/nvme_ns_cmd.o 00:03:03.162 CC lib/nvme/nvme_ns.o 00:03:03.162 CC lib/nvme/nvme_fabric.o 00:03:03.162 CC lib/nvme/nvme_pcie_common.o 00:03:03.162 CC lib/nvme/nvme_pcie.o 00:03:03.162 CC lib/nvme/nvme_qpair.o 00:03:03.162 CC lib/nvme/nvme.o 00:03:03.162 LIB libspdk_thread.a 00:03:03.454 CC lib/nvme/nvme_quirks.o 00:03:03.721 CC lib/nvme/nvme_transport.o 00:03:03.721 CC lib/nvme/nvme_discovery.o 00:03:03.721 CC lib/accel/accel.o 00:03:03.721 CC lib/accel/accel_rpc.o 00:03:03.980 CC lib/blob/blobstore.o 00:03:03.980 CC lib/accel/accel_sw.o 00:03:03.980 CC lib/init/json_config.o 00:03:03.980 CC lib/init/subsystem.o 00:03:03.980 CC lib/blob/request.o 00:03:03.980 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.980 CC lib/init/subsystem_rpc.o 00:03:03.980 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.980 CC lib/nvme/nvme_tcp.o 00:03:03.981 CC lib/blob/zeroes.o 00:03:03.981 LIB libspdk_accel.a 00:03:03.981 CC lib/blob/blob_bs_dev.o 00:03:03.981 CC lib/init/rpc.o 00:03:03.981 CC lib/nvme/nvme_opal.o 00:03:04.239 LIB libspdk_init.a 00:03:04.239 CC lib/bdev/bdev.o 00:03:04.239 CC lib/bdev/bdev_rpc.o 00:03:04.239 CC lib/bdev/bdev_zone.o 00:03:04.239 CC lib/bdev/part.o 00:03:04.239 CC lib/nvme/nvme_io_msg.o 00:03:04.239 CC lib/event/app.o 00:03:04.239 CC lib/bdev/scsi_nvme.o 00:03:04.239 CC lib/event/reactor.o 00:03:04.497 CC lib/nvme/nvme_poll_group.o 00:03:04.497 LIB libspdk_blob.a 00:03:04.497 CC lib/event/log_rpc.o 00:03:04.497 CC lib/nvme/nvme_zns.o 00:03:04.497 CC lib/event/app_rpc.o 00:03:04.497 CC lib/blobfs/blobfs.o 00:03:04.497 CC lib/event/scheduler_static.o 00:03:04.497 CC lib/lvol/lvol.o 00:03:04.755 CC lib/blobfs/tree.o 00:03:04.755 CC lib/nvme/nvme_stubs.o 00:03:04.755 LIB libspdk_event.a 00:03:04.755 CC lib/nvme/nvme_auth.o 00:03:04.755 CC lib/nvme/nvme_rdma.o 00:03:04.755 LIB libspdk_bdev.a 00:03:04.756 LIB libspdk_blobfs.a 00:03:04.756 LIB libspdk_lvol.a 00:03:04.756 CC lib/scsi/dev.o 00:03:04.756 CC lib/scsi/lun.o 00:03:04.756 CC lib/scsi/port.o 00:03:04.756 CC lib/scsi/scsi.o 00:03:04.756 CC lib/scsi/scsi_bdev.o 00:03:05.014 CC lib/scsi/scsi_pr.o 00:03:05.014 CC lib/scsi/scsi_rpc.o 00:03:05.014 CC lib/scsi/task.o 00:03:05.014 LIB libspdk_scsi.a 00:03:05.273 CC lib/iscsi/conn.o 00:03:05.273 CC lib/iscsi/init_grp.o 00:03:05.273 CC lib/iscsi/md5.o 00:03:05.273 CC lib/iscsi/iscsi.o 00:03:05.273 CC lib/iscsi/param.o 00:03:05.273 CC lib/iscsi/portal_grp.o 00:03:05.273 CC lib/iscsi/tgt_node.o 00:03:05.273 CC lib/iscsi/iscsi_subsystem.o 00:03:05.273 CC lib/iscsi/iscsi_rpc.o 00:03:05.273 CC lib/iscsi/task.o 00:03:05.273 LIB libspdk_nvme.a 00:03:05.531 CC lib/nvmf/ctrlr.o 00:03:05.531 CC lib/nvmf/ctrlr_discovery.o 00:03:05.531 CC lib/nvmf/ctrlr_bdev.o 00:03:05.531 CC lib/nvmf/subsystem.o 00:03:05.531 CC lib/nvmf/nvmf.o 00:03:05.531 CC lib/nvmf/nvmf_rpc.o 00:03:05.531 CC lib/nvmf/transport.o 00:03:05.531 CC lib/nvmf/tcp.o 00:03:05.531 CC lib/nvmf/stubs.o 00:03:05.531 LIB libspdk_iscsi.a 00:03:05.531 CC lib/nvmf/mdns_server.o 00:03:05.531 CC lib/nvmf/rdma.o 00:03:05.531 CC lib/nvmf/auth.o 00:03:06.146 LIB libspdk_nvmf.a 00:03:06.146 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.146 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.146 CC module/blob/bdev/blob_bdev.o 00:03:06.146 CC module/sock/posix/posix.o 00:03:06.146 CC module/accel/iaa/accel_iaa.o 00:03:06.146 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.146 CC 
module/accel/ioat/accel_ioat.o 00:03:06.146 CC module/keyring/file/keyring.o 00:03:06.146 CC module/accel/error/accel_error.o 00:03:06.146 CC module/accel/dsa/accel_dsa.o 00:03:06.405 LIB libspdk_env_dpdk_rpc.a 00:03:06.405 CC module/keyring/file/keyring_rpc.o 00:03:06.405 LIB libspdk_scheduler_dynamic.a 00:03:06.405 CC module/accel/error/accel_error_rpc.o 00:03:06.405 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.405 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.405 LIB libspdk_blob_bdev.a 00:03:06.405 LIB libspdk_accel_iaa.a 00:03:06.405 LIB libspdk_keyring_file.a 00:03:06.405 LIB libspdk_accel_ioat.a 00:03:06.405 LIB libspdk_accel_dsa.a 00:03:06.405 LIB libspdk_accel_error.a 00:03:06.405 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.405 CC module/bdev/error/vbdev_error.o 00:03:06.405 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.405 CC module/bdev/delay/vbdev_delay.o 00:03:06.405 CC module/bdev/gpt/gpt.o 00:03:06.406 CC module/bdev/null/bdev_null.o 00:03:06.406 CC module/bdev/malloc/bdev_malloc.o 00:03:06.406 CC module/bdev/nvme/bdev_nvme.o 00:03:06.406 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.663 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.663 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.663 LIB libspdk_sock_posix.a 00:03:06.663 CC module/bdev/null/bdev_null_rpc.o 00:03:06.663 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.663 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.663 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.663 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.663 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.663 LIB libspdk_blobfs_bdev.a 00:03:06.663 LIB libspdk_bdev_error.a 00:03:06.663 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.663 CC module/bdev/nvme/nvme_rpc.o 00:03:06.663 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.663 LIB libspdk_bdev_delay.a 00:03:06.663 LIB libspdk_bdev_null.a 00:03:06.663 LIB libspdk_bdev_passthru.a 00:03:06.663 LIB libspdk_bdev_gpt.a 00:03:06.663 CC module/bdev/raid/bdev_raid.o 00:03:06.663 CC module/bdev/split/vbdev_split.o 00:03:06.663 LIB libspdk_bdev_malloc.a 00:03:06.922 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.922 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.922 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.922 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.922 CC module/bdev/aio/bdev_aio.o 00:03:06.922 LIB libspdk_bdev_lvol.a 00:03:06.922 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.922 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.922 CC module/bdev/raid/raid0.o 00:03:06.922 CC module/bdev/raid/raid1.o 00:03:06.922 CC module/bdev/raid/concat.o 00:03:06.922 LIB libspdk_bdev_zone_block.a 00:03:06.922 LIB libspdk_bdev_split.a 00:03:06.922 LIB libspdk_bdev_aio.a 00:03:06.922 LIB libspdk_bdev_raid.a 00:03:07.181 LIB libspdk_bdev_nvme.a 00:03:07.439 CC module/event/subsystems/vmd/vmd.o 00:03:07.439 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.439 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.439 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.439 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.439 CC module/event/subsystems/sock/sock.o 00:03:07.439 CC module/event/subsystems/keyring/keyring.o 00:03:07.439 LIB libspdk_event_keyring.a 00:03:07.439 LIB libspdk_event_vmd.a 00:03:07.439 LIB libspdk_event_scheduler.a 00:03:07.439 LIB libspdk_event_sock.a 00:03:07.439 LIB libspdk_event_iobuf.a 00:03:07.439 CC module/event/subsystems/accel/accel.o 00:03:07.697 LIB libspdk_event_accel.a 00:03:07.697 CC module/event/subsystems/bdev/bdev.o 00:03:07.955 LIB libspdk_event_bdev.a 
00:03:07.955 CC module/event/subsystems/scsi/scsi.o 00:03:07.955 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.955 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:07.955 LIB libspdk_event_scsi.a 00:03:08.212 LIB libspdk_event_nvmf.a 00:03:08.212 CC module/event/subsystems/iscsi/iscsi.o 00:03:08.212 LIB libspdk_event_iscsi.a 00:03:08.470 CC test/rpc_client/rpc_client_test.o 00:03:08.470 CXX app/trace/trace.o 00:03:08.470 TEST_HEADER include/spdk/accel.h 00:03:08.470 TEST_HEADER include/spdk/accel_module.h 00:03:08.470 TEST_HEADER include/spdk/assert.h 00:03:08.470 TEST_HEADER include/spdk/barrier.h 00:03:08.470 TEST_HEADER include/spdk/base64.h 00:03:08.470 TEST_HEADER include/spdk/bdev.h 00:03:08.470 TEST_HEADER include/spdk/bdev_module.h 00:03:08.470 TEST_HEADER include/spdk/bdev_zone.h 00:03:08.470 TEST_HEADER include/spdk/bit_array.h 00:03:08.470 TEST_HEADER include/spdk/bit_pool.h 00:03:08.470 TEST_HEADER include/spdk/blob.h 00:03:08.470 TEST_HEADER include/spdk/blob_bdev.h 00:03:08.470 TEST_HEADER include/spdk/blobfs.h 00:03:08.470 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:08.470 TEST_HEADER include/spdk/conf.h 00:03:08.470 TEST_HEADER include/spdk/config.h 00:03:08.470 TEST_HEADER include/spdk/cpuset.h 00:03:08.470 TEST_HEADER include/spdk/crc16.h 00:03:08.470 TEST_HEADER include/spdk/crc32.h 00:03:08.470 TEST_HEADER include/spdk/crc64.h 00:03:08.470 CC examples/util/zipf/zipf.o 00:03:08.470 TEST_HEADER include/spdk/dif.h 00:03:08.470 TEST_HEADER include/spdk/dma.h 00:03:08.470 TEST_HEADER include/spdk/endian.h 00:03:08.470 TEST_HEADER include/spdk/env.h 00:03:08.470 TEST_HEADER include/spdk/env_dpdk.h 00:03:08.470 TEST_HEADER include/spdk/event.h 00:03:08.470 TEST_HEADER include/spdk/fd.h 00:03:08.470 TEST_HEADER include/spdk/fd_group.h 00:03:08.470 TEST_HEADER include/spdk/file.h 00:03:08.470 CC examples/ioat/perf/perf.o 00:03:08.470 CC test/thread/poller_perf/poller_perf.o 00:03:08.470 TEST_HEADER include/spdk/ftl.h 00:03:08.470 TEST_HEADER include/spdk/gpt_spec.h 00:03:08.470 TEST_HEADER include/spdk/hexlify.h 00:03:08.470 TEST_HEADER include/spdk/histogram_data.h 00:03:08.470 TEST_HEADER include/spdk/idxd.h 00:03:08.470 TEST_HEADER include/spdk/idxd_spec.h 00:03:08.470 TEST_HEADER include/spdk/init.h 00:03:08.470 TEST_HEADER include/spdk/ioat.h 00:03:08.470 CC test/dma/test_dma/test_dma.o 00:03:08.470 TEST_HEADER include/spdk/ioat_spec.h 00:03:08.470 TEST_HEADER include/spdk/iscsi_spec.h 00:03:08.470 TEST_HEADER include/spdk/json.h 00:03:08.470 TEST_HEADER include/spdk/jsonrpc.h 00:03:08.470 TEST_HEADER include/spdk/keyring.h 00:03:08.470 TEST_HEADER include/spdk/keyring_module.h 00:03:08.470 TEST_HEADER include/spdk/likely.h 00:03:08.470 TEST_HEADER include/spdk/log.h 00:03:08.470 TEST_HEADER include/spdk/lvol.h 00:03:08.470 TEST_HEADER include/spdk/memory.h 00:03:08.470 TEST_HEADER include/spdk/mmio.h 00:03:08.470 TEST_HEADER include/spdk/nbd.h 00:03:08.470 TEST_HEADER include/spdk/notify.h 00:03:08.470 TEST_HEADER include/spdk/nvme.h 00:03:08.470 CC test/env/mem_callbacks/mem_callbacks.o 00:03:08.470 TEST_HEADER include/spdk/nvme_intel.h 00:03:08.470 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:08.470 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:08.470 CC test/app/bdev_svc/bdev_svc.o 00:03:08.470 TEST_HEADER include/spdk/nvme_spec.h 00:03:08.470 TEST_HEADER include/spdk/nvme_zns.h 00:03:08.470 TEST_HEADER include/spdk/nvmf.h 00:03:08.470 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:08.470 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:08.470 TEST_HEADER 
include/spdk/nvmf_spec.h 00:03:08.470 TEST_HEADER include/spdk/nvmf_transport.h 00:03:08.470 TEST_HEADER include/spdk/opal.h 00:03:08.470 TEST_HEADER include/spdk/opal_spec.h 00:03:08.470 TEST_HEADER include/spdk/pci_ids.h 00:03:08.470 TEST_HEADER include/spdk/pipe.h 00:03:08.470 TEST_HEADER include/spdk/queue.h 00:03:08.470 TEST_HEADER include/spdk/reduce.h 00:03:08.470 TEST_HEADER include/spdk/rpc.h 00:03:08.470 TEST_HEADER include/spdk/scheduler.h 00:03:08.470 TEST_HEADER include/spdk/scsi.h 00:03:08.470 TEST_HEADER include/spdk/scsi_spec.h 00:03:08.470 TEST_HEADER include/spdk/sock.h 00:03:08.470 TEST_HEADER include/spdk/stdinc.h 00:03:08.470 TEST_HEADER include/spdk/string.h 00:03:08.470 TEST_HEADER include/spdk/thread.h 00:03:08.470 TEST_HEADER include/spdk/trace.h 00:03:08.470 TEST_HEADER include/spdk/trace_parser.h 00:03:08.470 TEST_HEADER include/spdk/tree.h 00:03:08.470 TEST_HEADER include/spdk/ublk.h 00:03:08.470 TEST_HEADER include/spdk/util.h 00:03:08.470 TEST_HEADER include/spdk/uuid.h 00:03:08.470 TEST_HEADER include/spdk/version.h 00:03:08.470 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:08.470 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:08.470 TEST_HEADER include/spdk/vhost.h 00:03:08.470 TEST_HEADER include/spdk/vmd.h 00:03:08.470 TEST_HEADER include/spdk/xor.h 00:03:08.470 TEST_HEADER include/spdk/zipf.h 00:03:08.470 CXX test/cpp_headers/accel.o 00:03:08.470 LINK rpc_client_test 00:03:08.470 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:08.470 LINK ioat_perf 00:03:08.470 LINK poller_perf 00:03:08.470 LINK zipf 00:03:08.470 LINK histogram_ut 00:03:08.470 LINK bdev_svc 00:03:08.753 CXX test/cpp_headers/accel_module.o 00:03:08.753 CC examples/ioat/verify/verify.o 00:03:08.753 CC test/thread/lock/spdk_lock.o 00:03:08.753 LINK test_dma 00:03:08.753 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:08.753 LINK verify 00:03:08.753 CC app/trace_record/trace_record.o 00:03:08.753 CC test/unit/lib/log/log.c/log_ut.o 00:03:08.753 CC examples/thread/thread/thread_ex.o 00:03:08.753 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.753 CXX test/cpp_headers/assert.o 00:03:08.753 LINK nvme_fuzz 00:03:09.021 CC examples/sock/hello_world/hello_sock.o 00:03:09.021 LINK spdk_trace_record 00:03:09.021 LINK log_ut 00:03:09.021 LINK spdk_lock 00:03:09.021 CXX test/cpp_headers/barrier.o 00:03:09.021 LINK thread 00:03:09.021 CC examples/vmd/lsvmd/lsvmd.o 00:03:09.021 LINK hello_sock 00:03:09.021 CC test/app/histogram_perf/histogram_perf.o 00:03:09.021 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:09.021 CC test/nvme/aer/aer.o 00:03:09.021 LINK mem_callbacks 00:03:09.021 LINK lsvmd 00:03:09.021 CC test/env/vtophys/vtophys.o 00:03:09.021 CXX test/cpp_headers/base64.o 00:03:09.021 LINK histogram_perf 00:03:09.278 CC app/nvmf_tgt/nvmf_main.o 00:03:09.278 CC examples/vmd/led/led.o 00:03:09.278 LINK aer 00:03:09.278 LINK vtophys 00:03:09.278 CC examples/idxd/perf/perf.o 00:03:09.278 LINK iscsi_fuzz 00:03:09.278 LINK spdk_trace 00:03:09.278 LINK led 00:03:09.278 CXX test/cpp_headers/bdev.o 00:03:09.278 CC test/accel/dif/dif.o 00:03:09.278 LINK nvmf_tgt 00:03:09.278 LINK common_ut 00:03:09.278 CXX test/cpp_headers/bdev_module.o 00:03:09.278 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.278 CC test/nvme/reset/reset.o 00:03:09.278 CC test/app/jsoncat/jsoncat.o 00:03:09.278 LINK idxd_perf 00:03:09.278 LINK env_dpdk_post_init 00:03:09.535 LINK jsoncat 00:03:09.535 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.535 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:09.535 LINK reset 
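
The TEST_HEADER and CXX test/cpp_headers/*.o lines in this stretch are what appears to be a header self-containment pass: each public SPDK header is compiled on its own as C++, so a header that forgets to include its own dependencies fails here rather than in some unlucky consumer. A minimal sketch of the same technique in shell (the loop and temp-file names are illustrative, not SPDK's actual build rule):

    # Compile each public header standalone; a header that does not pull in
    # its own dependencies fails at this step instead of downstream.
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
        c++ -I include -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $h"
    done
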
00:03:09.535 LINK dif 00:03:09.535 CC test/blobfs/mkfs/mkfs.o 00:03:09.535 CC test/app/stub/stub.o 00:03:09.535 CC examples/accel/perf/accel_perf.o 00:03:09.535 CXX test/cpp_headers/bdev_zone.o 00:03:09.535 CC test/env/memory/memory_ut.o 00:03:09.535 LINK base64_ut 00:03:09.535 CC test/event/event_perf/event_perf.o 00:03:09.535 LINK iscsi_tgt 00:03:09.535 CC test/nvme/sgl/sgl.o 00:03:09.535 CXX test/cpp_headers/bit_array.o 00:03:09.535 LINK mkfs 00:03:09.535 LINK stub 00:03:09.535 LINK event_perf 00:03:09.535 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:09.535 LINK accel_perf 00:03:09.535 LINK sgl 00:03:09.793 CC test/event/reactor/reactor.o 00:03:09.793 CC test/nvme/e2edp/nvme_dp.o 00:03:09.793 CXX test/cpp_headers/bit_pool.o 00:03:09.793 CC app/spdk_tgt/spdk_tgt.o 00:03:09.793 gmake[2]: Nothing to be done for 'all'. 00:03:09.793 CC app/spdk_lspci/spdk_lspci.o 00:03:09.793 LINK reactor 00:03:09.793 CC test/nvme/overhead/overhead.o 00:03:09.793 CC test/env/pci/pci_ut.o 00:03:09.793 LINK bit_array_ut 00:03:09.793 CC examples/blob/hello_world/hello_blob.o 00:03:09.793 LINK nvme_dp 00:03:09.793 LINK spdk_lspci 00:03:09.793 LINK spdk_tgt 00:03:09.793 CC test/event/reactor_perf/reactor_perf.o 00:03:09.793 CXX test/cpp_headers/blob.o 00:03:09.793 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:10.051 LINK overhead 00:03:10.051 LINK reactor_perf 00:03:10.051 LINK pci_ut 00:03:10.051 CC test/nvme/err_injection/err_injection.o 00:03:10.051 LINK hello_blob 00:03:10.051 CC app/spdk_nvme_perf/perf.o 00:03:10.051 LINK cpuset_ut 00:03:10.051 CXX test/cpp_headers/blob_bdev.o 00:03:10.051 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:10.051 LINK err_injection 00:03:10.051 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:10.051 CC examples/nvme/hello_world/hello_world.o 00:03:10.051 CC examples/blob/cli/blobcli.o 00:03:10.051 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.051 CC app/spdk_nvme_identify/identify.o 00:03:10.308 LINK crc16_ut 00:03:10.308 CC test/nvme/startup/startup.o 00:03:10.308 LINK hello_world 00:03:10.308 CXX test/cpp_headers/blobfs.o 00:03:10.308 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:10.308 LINK hello_bdev 00:03:10.308 LINK memory_ut 00:03:10.308 LINK dma_ut 00:03:10.308 LINK spdk_nvme_perf 00:03:10.308 LINK blobcli 00:03:10.308 CC examples/nvme/reconnect/reconnect.o 00:03:10.308 LINK startup 00:03:10.308 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.308 LINK crc32_ieee_ut 00:03:10.308 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:10.308 LINK spdk_nvme_identify 00:03:10.308 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:10.566 CC test/nvme/reserve/reserve.o 00:03:10.566 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:10.566 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.566 LINK crc32c_ut 00:03:10.566 CC test/bdev/bdevio/bdevio.o 00:03:10.566 LINK reconnect 00:03:10.566 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:10.566 LINK crc64_ut 00:03:10.566 CXX test/cpp_headers/conf.o 00:03:10.566 CC test/nvme/simple_copy/simple_copy.o 00:03:10.566 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.566 LINK reserve 00:03:10.566 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:10.566 CC test/nvme/connect_stress/connect_stress.o 00:03:10.566 LINK nvme_manage 00:03:10.566 LINK spdk_nvme_discover 00:03:10.566 LINK simple_copy 00:03:10.566 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:10.566 LINK bdevio 00:03:10.566 LINK ioat_ut 00:03:10.566 CXX test/cpp_headers/config.o 00:03:10.566 LINK connect_stress 00:03:10.566 CXX test/cpp_headers/cpuset.o 00:03:10.823 LINK bdevperf 
00:03:10.823 CXX test/cpp_headers/crc16.o 00:03:10.824 CC examples/nvme/arbitration/arbitration.o 00:03:10.824 CC app/spdk_top/spdk_top.o 00:03:10.824 LINK iov_ut 00:03:10.824 CC test/nvme/boot_partition/boot_partition.o 00:03:10.824 CXX test/cpp_headers/crc32.o 00:03:10.824 CC examples/nvme/hotplug/hotplug.o 00:03:10.824 CC app/fio/nvme/fio_plugin.o 00:03:10.824 LINK boot_partition 00:03:10.824 LINK arbitration 00:03:10.824 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:10.824 CC test/unit/lib/util/math.c/math_ut.o 00:03:10.824 LINK dif_ut 00:03:11.081 LINK hotplug 00:03:11.081 CXX test/cpp_headers/crc64.o 00:03:11.081 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:11.081 CC test/nvme/compliance/nvme_compliance.o 00:03:11.081 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.081 LINK math_ut 00:03:11.081 LINK cmb_copy 00:03:11.081 CC examples/nvme/abort/abort.o 00:03:11.081 CXX test/cpp_headers/dif.o 00:03:11.081 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:11.081 struct spdk_nvme_fdp_ruhs ruhs; 00:03:11.081 ^ 00:03:11.081 LINK spdk_top 00:03:11.081 LINK fused_ordering 00:03:11.081 CC app/fio/bdev/fio_plugin.o 00:03:11.081 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.081 1 warning generated. 00:03:11.081 LINK spdk_nvme 00:03:11.081 CXX test/cpp_headers/dma.o 00:03:11.081 LINK abort 00:03:11.081 LINK nvme_compliance 00:03:11.081 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.081 LINK pipe_ut 00:03:11.081 CC test/unit/lib/util/string.c/string_ut.o 00:03:11.373 CC test/nvme/fdp/fdp.o 00:03:11.373 CXX test/cpp_headers/endian.o 00:03:11.373 LINK doorbell_aers 00:03:11.373 CXX test/cpp_headers/env.o 00:03:11.373 CXX test/cpp_headers/env_dpdk.o 00:03:11.373 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:11.373 LINK pmr_persistence 00:03:11.373 CXX test/cpp_headers/event.o 00:03:11.373 CXX test/cpp_headers/fd.o 00:03:11.373 LINK spdk_bdev 00:03:11.373 LINK string_ut 00:03:11.373 LINK fdp 00:03:11.373 CXX test/cpp_headers/fd_group.o 00:03:11.373 CXX test/cpp_headers/file.o 00:03:11.373 CXX test/cpp_headers/ftl.o 00:03:11.373 CXX test/cpp_headers/gpt_spec.o 00:03:11.373 CXX test/cpp_headers/hexlify.o 00:03:11.373 CXX test/cpp_headers/histogram_data.o 00:03:11.373 LINK xor_ut 00:03:11.373 CC examples/nvmf/nvmf/nvmf.o 00:03:11.631 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:11.631 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:11.631 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:11.631 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:11.631 CXX test/cpp_headers/idxd.o 00:03:11.631 CXX test/cpp_headers/idxd_spec.o 00:03:11.631 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:11.631 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:11.631 CXX test/cpp_headers/init.o 00:03:11.631 LINK nvmf 00:03:11.631 LINK pci_event_ut 00:03:11.631 CXX test/cpp_headers/ioat.o 00:03:11.631 CXX test/cpp_headers/ioat_spec.o 00:03:11.631 CXX test/cpp_headers/iscsi_spec.o 00:03:11.631 CXX test/cpp_headers/json.o 00:03:11.631 CXX test/cpp_headers/jsonrpc.o 00:03:11.631 LINK idxd_user_ut 00:03:11.889 LINK json_util_ut 00:03:11.889 CXX test/cpp_headers/keyring.o 00:03:11.889 CXX test/cpp_headers/keyring_module.o 00:03:11.889 LINK idxd_ut 00:03:11.889 CXX test/cpp_headers/likely.o 00:03:11.889 CXX test/cpp_headers/log.o 00:03:11.889 CXX test/cpp_headers/lvol.o 00:03:11.889 CXX test/cpp_headers/memory.o 00:03:11.889 CXX 
test/cpp_headers/mmio.o 00:03:11.889 CXX test/cpp_headers/nbd.o 00:03:11.889 CXX test/cpp_headers/notify.o 00:03:11.889 CXX test/cpp_headers/nvme.o 00:03:11.889 LINK json_write_ut 00:03:11.890 CXX test/cpp_headers/nvme_intel.o 00:03:11.890 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.890 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:12.149 CXX test/cpp_headers/nvme_spec.o 00:03:12.149 CXX test/cpp_headers/nvme_zns.o 00:03:12.149 CXX test/cpp_headers/nvmf.o 00:03:12.149 LINK json_parse_ut 00:03:12.149 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.149 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:12.149 CXX test/cpp_headers/nvmf_spec.o 00:03:12.149 CXX test/cpp_headers/nvmf_transport.o 00:03:12.149 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:12.149 CXX test/cpp_headers/opal.o 00:03:12.149 CXX test/cpp_headers/opal_spec.o 00:03:12.149 CXX test/cpp_headers/pci_ids.o 00:03:12.149 CXX test/cpp_headers/pipe.o 00:03:12.149 CXX test/cpp_headers/queue.o 00:03:12.149 CXX test/cpp_headers/reduce.o 00:03:12.149 CXX test/cpp_headers/rpc.o 00:03:12.149 CXX test/cpp_headers/scheduler.o 00:03:12.149 CXX test/cpp_headers/scsi.o 00:03:12.406 LINK jsonrpc_server_ut 00:03:12.406 CXX test/cpp_headers/scsi_spec.o 00:03:12.406 CXX test/cpp_headers/sock.o 00:03:12.406 CXX test/cpp_headers/stdinc.o 00:03:12.406 CXX test/cpp_headers/string.o 00:03:12.406 CXX test/cpp_headers/thread.o 00:03:12.406 CXX test/cpp_headers/trace.o 00:03:12.406 CXX test/cpp_headers/trace_parser.o 00:03:12.406 CXX test/cpp_headers/tree.o 00:03:12.406 CXX test/cpp_headers/ublk.o 00:03:12.406 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:12.406 CXX test/cpp_headers/util.o 00:03:12.406 CXX test/cpp_headers/uuid.o 00:03:12.406 CXX test/cpp_headers/version.o 00:03:12.406 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.406 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.406 CXX test/cpp_headers/vhost.o 00:03:12.663 CXX test/cpp_headers/vmd.o 00:03:12.663 CXX test/cpp_headers/xor.o 00:03:12.663 CXX test/cpp_headers/zipf.o 00:03:12.663 LINK rpc_ut 00:03:12.921 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:12.921 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:12.921 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:12.921 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:12.921 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:12.921 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:13.180 LINK keyring_ut 00:03:13.180 LINK iobuf_ut 00:03:13.180 LINK notify_ut 00:03:13.180 LINK posix_ut 00:03:13.180 LINK thread_ut 00:03:13.437 LINK sock_ut 00:03:13.437 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:13.437 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:13.437 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:13.437 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:13.437 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:13.437 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:13.437 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:13.437 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:13.437 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:13.437 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:13.695 LINK rpc_ut 00:03:13.695 LINK subsystem_ut 00:03:13.695 LINK blob_bdev_ut 00:03:13.695 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:13.695 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:13.695 CC test/unit/lib/event/app.c/app_ut.o 00:03:13.952 LINK app_ut 00:03:14.250 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:14.250 LINK accel_ut 00:03:14.250 LINK nvme_ns_ut 
00:03:14.250 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:14.250 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:14.250 LINK nvme_ctrlr_cmd_ut 00:03:14.250 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:14.250 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:14.250 LINK nvme_ut 00:03:14.250 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:14.250 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:14.508 LINK reactor_ut 00:03:14.508 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:14.508 LINK nvme_ctrlr_ut 00:03:14.508 LINK nvme_ns_cmd_ut 00:03:14.508 LINK nvme_ns_ocssd_cmd_ut 00:03:14.765 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:14.765 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:14.765 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:14.765 LINK scsi_nvme_ut 00:03:14.765 LINK nvme_poll_group_ut 00:03:14.765 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:15.022 LINK nvme_qpair_ut 00:03:15.022 LINK gpt_ut 00:03:15.022 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:15.022 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:15.022 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:15.022 LINK nvme_quirks_ut 00:03:15.022 LINK nvme_pcie_ut 00:03:15.022 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:15.022 LINK blob_ut 00:03:15.022 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:15.279 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:15.279 LINK vbdev_lvol_ut 00:03:15.279 LINK bdev_zone_ut 00:03:15.279 LINK part_ut 00:03:15.279 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:15.279 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:15.279 LINK bdev_raid_sb_ut 00:03:15.279 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:15.536 LINK tree_ut 00:03:15.536 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:15.536 LINK bdev_ut 00:03:15.536 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:15.536 LINK bdev_raid_ut 00:03:15.536 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:15.536 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:15.793 LINK nvme_transport_ut 00:03:15.793 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:15.793 LINK vbdev_zone_block_ut 00:03:15.793 LINK concat_ut 00:03:15.793 LINK nvme_io_msg_ut 00:03:15.793 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:15.793 LINK bdev_ut 00:03:16.050 LINK blobfs_async_ut 00:03:16.050 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:16.050 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:16.050 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:16.050 LINK lvol_ut 00:03:16.050 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:16.050 LINK nvme_tcp_ut 00:03:16.050 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:16.050 LINK nvme_pcie_common_ut 00:03:16.050 LINK blobfs_sync_ut 00:03:16.050 LINK nvme_opal_ut 00:03:16.306 LINK blobfs_bdev_ut 00:03:16.306 LINK raid1_ut 00:03:16.306 LINK raid0_ut 00:03:16.306 LINK nvme_fabric_ut 00:03:16.869 LINK nvme_rdma_ut 00:03:17.125 LINK bdev_nvme_ut 00:03:17.383 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:17.383 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:17.383 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:17.383 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:17.383 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:17.383 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:17.383 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:17.383 CC 
test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:03:17.383 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:03:17.383 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:03:17.640 LINK scsi_ut
00:03:17.640 LINK dev_ut
00:03:17.640 LINK scsi_pr_ut
00:03:17.640 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:03:17.640 LINK lun_ut
00:03:17.640 CC test/unit/lib/nvmf/auth.c/auth_ut.o
00:03:17.640 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:03:17.640 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:03:17.640 LINK ctrlr_bdev_ut
00:03:17.897 LINK scsi_bdev_ut
00:03:17.897 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:03:17.897 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:03:18.154 LINK subsystem_ut
00:03:18.154 LINK auth_ut
00:03:18.154 LINK init_grp_ut
00:03:18.154 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:03:18.154 CC test/unit/lib/iscsi/param.c/param_ut.o
00:03:18.154 LINK ctrlr_discovery_ut
00:03:18.154 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:03:18.154 LINK nvmf_ut
00:03:18.154 LINK ctrlr_ut
00:03:18.154 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:03:18.413 LINK conn_ut
00:03:18.413 LINK tcp_ut
00:03:18.413 LINK rdma_ut
00:03:18.413 LINK param_ut
00:03:18.413 LINK transport_ut
00:03:18.671 LINK portal_grp_ut
00:03:18.671 LINK tgt_node_ut
00:03:18.928 LINK iscsi_ut
00:03:18.928 
00:03:18.928 real 1m2.494s
00:03:18.928 user 4m26.031s
00:03:18.928 sys 0m46.956s
00:03:18.928 18:17:11 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:18.928 ************************************
00:03:18.928 END TEST unittest_build
00:03:18.928 ************************************
00:03:18.928 18:17:11 unittest_build -- common/autotest_common.sh@10 -- $ set +x
00:03:18.928 18:17:11 -- common/autotest_common.sh@1142 -- $ return 0
00:03:18.928 18:17:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:18.928 18:17:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:18.928 18:17:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:18.928 18:17:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.928 18:17:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:18.928 18:17:11 -- pm/common@44 -- $ pid=1274
00:03:18.928 18:17:11 -- pm/common@50 -- $ kill -TERM 1274
00:03:18.928 18:17:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:18.928 18:17:11 -- nvmf/common.sh@7 -- # uname -s
00:03:18.928 18:17:11 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:03:18.928 18:17:11 -- nvmf/common.sh@7 -- # return 0
00:03:18.928 18:17:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:18.928 18:17:11 -- spdk/autotest.sh@32 -- # uname -s
00:03:18.928 18:17:11 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']'
00:03:18.928 18:17:11 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:18.928 18:17:11 -- pm/common@17 -- # local monitor
00:03:18.928 18:17:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.928 18:17:11 -- pm/common@25 -- # sleep 1
00:03:18.928 18:17:11 -- pm/common@21 -- # date +%s
00:03:18.928 18:17:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721067431
00:03:18.928 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721067431_collect-vmstat.pm.log
00:03:20.304 18:17:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
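
The trap installed on the last line above is the harness's whole cleanup story: autotest_cleanup runs on interrupt, termination, or any exit, and the `|| :` keeps a failed cleanup from aborting the trap before `exit 1` runs. A minimal standalone sketch of the same pattern, with an illustrative cleanup body and PID-file path (the real harness kills the pm/collect-* monitors recorded under ../output/power, as the pm/common trace above shows):

    cleanup() {
        # Stop a background monitor recorded in a PID file, if one is running.
        pidfile=/tmp/collect-vmstat.pid   # hypothetical path
        [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")"
    }
    trap 'cleanup || :; exit 1' SIGINT SIGTERM EXIT
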
00:03:20.304 18:17:12 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:20.304 18:17:12 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:20.304 18:17:12 -- common/autotest_common.sh@10 -- # set +x
00:03:20.304 18:17:12 -- spdk/autotest.sh@59 -- # create_test_list
00:03:20.304 18:17:12 -- common/autotest_common.sh@746 -- # xtrace_disable
00:03:20.304 18:17:12 -- common/autotest_common.sh@10 -- # set +x
00:03:20.304 18:17:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:03:20.304 18:17:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:03:20.304 18:17:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:03:20.304 18:17:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:03:20.304 18:17:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:03:20.304 18:17:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:20.305 18:17:12 -- common/autotest_common.sh@1455 -- # uname
00:03:20.305 18:17:12 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']'
00:03:20.305 18:17:12 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko
00:03:20.305 kldunload: can't find file contigmem.ko
00:03:20.305 18:17:12 -- common/autotest_common.sh@1456 -- # true
00:03:20.305 18:17:12 -- common/autotest_common.sh@1457 -- # '[' -n '' ']'
00:03:20.305 18:17:12 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/
00:03:20.305 18:17:12 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/
00:03:20.305 18:17:12 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/
00:03:20.305 18:17:12 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/
00:03:20.305 18:17:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsockbuf
00:03:20.305 18:17:12 -- common/autotest_common.sh@1475 -- # uname
00:03:20.305 18:17:12 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]]
00:03:20.305 18:17:12 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf
00:03:20.305 18:17:12 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 ))
00:03:20.305 18:17:12 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304
00:03:20.305 kern.ipc.maxsockbuf: 2097152 -> 4194304
00:03:20.305 18:17:12 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:03:20.305 18:17:12 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang
00:03:20.305 18:17:12 -- spdk/autotest.sh@72 -- # hash lcov
00:03:20.305 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found
00:03:20.305 18:17:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:20.305 18:17:12 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:20.305 18:17:12 -- common/autotest_common.sh@10 -- # set +x
00:03:20.305 18:17:12 -- spdk/autotest.sh@91 -- # rm -f
00:03:20.305 18:17:12 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:20.305 kldunload: can't find file contigmem.ko
00:03:20.305 kldunload: can't find file nic_uio.ko
00:03:20.305 18:17:12 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:20.305 18:17:12 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:20.305 18:17:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:20.305 18:17:12 -- common/autotest_common.sh@1670 -- # local nvme bdf
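
Everything freebsd_update_contigmem_mod and freebsd_set_maxsockbuf do in the trace above fits in a few lines. A standalone sketch, run as root with the paths from this run; note the harness only copies the freshly built modules here and leaves loading them to the later setup.sh step:

    # Refresh the DPDK kernel modules SPDK needs on FreeBSD.
    cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/
    cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/
    # Raise the socket-buffer cap to the 4194304-byte floor the harness checks for.
    if [ "$(sysctl -n kern.ipc.maxsockbuf)" -lt 4194304 ]; then
        sysctl kern.ipc.maxsockbuf=4194304
    fi
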
00:03:20.305 18:17:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:20.305 18:17:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:20.305 18:17:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:20.305 18:17:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1
00:03:20.305 18:17:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt
00:03:20.305 18:17:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1
00:03:20.305 nvme0ns1 is not a block device
00:03:20.305 18:17:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1
00:03:20.305 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found
00:03:20.305 18:17:12 -- scripts/common.sh@391 -- # pt=
00:03:20.305 18:17:12 -- scripts/common.sh@392 -- # return 1
00:03:20.305 18:17:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1
00:03:20.305 1+0 records in
00:03:20.305 1+0 records out
00:03:20.305 1048576 bytes transferred in 0.006294 secs (166604251 bytes/sec)
00:03:20.305 18:17:12 -- spdk/autotest.sh@118 -- # sync
00:03:20.872 18:17:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:20.872 18:17:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:20.872 18:17:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:21.814 18:17:13 -- spdk/autotest.sh@124 -- # uname -s
00:03:21.814 18:17:13 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']'
00:03:21.814 18:17:13 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:21.814 Contigmem (not present)
00:03:21.814 Buffer Size: not set
00:03:21.814 Num Buffers: not set
00:03:21.814 
00:03:21.814 
00:03:21.814 Type BDF Vendor Device Driver
00:03:21.814 NVMe 0:0:16:0 0x1b36 0x0010 nvme0
00:03:21.814 18:17:13 -- spdk/autotest.sh@130 -- # uname -s
00:03:21.814 18:17:13 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]]
00:03:21.814 18:17:13 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:03:21.814 18:17:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:03:21.814 18:17:13 -- common/autotest_common.sh@10 -- # set +x
00:03:21.814 18:17:13 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:03:21.814 18:17:13 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:21.814 18:17:13 -- common/autotest_common.sh@10 -- # set +x
00:03:21.814 18:17:13 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:21.814 kldunload: can't find file nic_uio.ko
00:03:21.814 hw.nic_uio.bdfs="0:16:0"
00:03:21.814 hw.contigmem.num_buffers="8"
00:03:21.814 hw.contigmem.buffer_size="268435456"
00:03:22.382 18:17:14 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:03:22.382 18:17:14 -- common/autotest_common.sh@728 -- # xtrace_disable
00:03:22.382 18:17:14 -- common/autotest_common.sh@10 -- # set +x
00:03:22.382 18:17:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:03:22.382 18:17:14 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs
00:03:22.382 18:17:14 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54
00:03:22.382 18:17:14 -- common/autotest_common.sh@1577 -- # bdfs=()
00:03:22.382 18:17:14 -- common/autotest_common.sh@1577 -- # local bdfs
00:03:22.382 18:17:14 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs
00:03:22.382 18:17:14 -- common/autotest_common.sh@1513 -- # bdfs=()
00:03:22.382 18:17:14 -- common/autotest_common.sh@1513 -- # local bdfs
00:03:22.382 18:17:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:22.382 18:17:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:22.382 18:17:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:03:22.645 18:17:14 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:03:22.645 18:17:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0
00:03:22.645 18:17:14 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs)
00:03:22.645 18:17:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:03:22.645 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory
00:03:22.645 18:17:14 -- common/autotest_common.sh@1580 -- # device=
00:03:22.645 18:17:14 -- common/autotest_common.sh@1580 -- # true
00:03:22.645 18:17:14 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]]
00:03:22.645 18:17:14 -- common/autotest_common.sh@1586 -- # printf '%s\n'
00:03:22.645 18:17:14 -- common/autotest_common.sh@1592 -- # [[ -z '' ]]
00:03:22.645 18:17:14 -- common/autotest_common.sh@1593 -- # return 0
00:03:22.645 18:17:14 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']'
00:03:22.645 18:17:14 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:22.645 18:17:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.645 18:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.645 18:17:14 -- common/autotest_common.sh@10 -- # set +x
00:03:22.645 ************************************
00:03:22.645 START TEST unittest
00:03:22.645 ************************************
00:03:22.645 18:17:14 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:22.645 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:22.645 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:03:22.645 + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:03:22.645 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:22.645 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
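
The dirname/readlink pair traced immediately above is the usual self-locating prologue: resolve the script's own directory first, then derive the repo root from it, so the test runs the same regardless of the caller's working directory. As a standalone sketch of the idiom:

    testdir=$(readlink -f "$(dirname "$0")")   # directory holding this script
    rootdir=$(readlink -f "$testdir/../..")    # repo root, two levels up
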
00:03:22.645 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:22.645 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:22.645 ++ rpc_py=rpc_cmd 00:03:22.645 ++ set -e 00:03:22.645 ++ shopt -s nullglob 00:03:22.645 ++ shopt -s extglob 00:03:22.645 ++ shopt -s inherit_errexit 00:03:22.645 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:22.645 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:22.645 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:22.645 +++ CONFIG_WPDK_DIR= 00:03:22.645 +++ CONFIG_ASAN=n 00:03:22.645 +++ CONFIG_VBDEV_COMPRESS=n 00:03:22.645 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:22.645 +++ CONFIG_USDT=n 00:03:22.645 +++ CONFIG_CUSTOMOCF=n 00:03:22.645 +++ CONFIG_PREFIX=/usr/local 00:03:22.645 +++ CONFIG_RBD=n 00:03:22.645 +++ CONFIG_LIBDIR= 00:03:22.645 +++ CONFIG_IDXD=y 00:03:22.645 +++ CONFIG_NVME_CUSE=n 00:03:22.645 +++ CONFIG_SMA=n 00:03:22.646 +++ CONFIG_VTUNE=n 00:03:22.646 +++ CONFIG_TSAN=n 00:03:22.646 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:22.646 +++ CONFIG_VFIO_USER_DIR= 00:03:22.646 +++ CONFIG_PGO_CAPTURE=n 00:03:22.646 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:22.646 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:22.646 +++ CONFIG_LTO=n 00:03:22.646 +++ CONFIG_ISCSI_INITIATOR=n 00:03:22.646 +++ CONFIG_CET=n 00:03:22.646 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:22.646 +++ CONFIG_OCF_PATH= 00:03:22.646 +++ CONFIG_RDMA_SET_TOS=y 00:03:22.646 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:22.646 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:22.646 +++ CONFIG_UBLK=n 00:03:22.646 +++ CONFIG_ISAL_CRYPTO=y 00:03:22.646 +++ CONFIG_OPENSSL_PATH= 00:03:22.646 +++ CONFIG_OCF=n 00:03:22.646 +++ CONFIG_FUSE=n 00:03:22.646 +++ CONFIG_VTUNE_DIR= 00:03:22.646 +++ CONFIG_FUZZER_LIB= 00:03:22.646 +++ CONFIG_FUZZER=n 00:03:22.646 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:22.646 +++ CONFIG_CRYPTO=n 00:03:22.646 +++ CONFIG_PGO_USE=n 00:03:22.646 +++ CONFIG_VHOST=n 00:03:22.646 +++ CONFIG_DAOS=n 00:03:22.646 +++ CONFIG_DPDK_INC_DIR= 00:03:22.646 +++ CONFIG_DAOS_DIR= 00:03:22.646 +++ CONFIG_UNIT_TESTS=y 00:03:22.646 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:22.646 +++ CONFIG_VIRTIO=n 00:03:22.646 +++ CONFIG_DPDK_UADK=n 00:03:22.646 +++ CONFIG_COVERAGE=n 00:03:22.646 +++ CONFIG_RDMA=y 00:03:22.646 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:22.646 +++ CONFIG_URING_PATH= 00:03:22.646 +++ CONFIG_XNVME=n 00:03:22.646 +++ CONFIG_VFIO_USER=n 00:03:22.646 +++ CONFIG_ARCH=native 00:03:22.646 +++ CONFIG_HAVE_EVP_MAC=y 00:03:22.646 +++ CONFIG_URING_ZNS=n 00:03:22.646 +++ CONFIG_WERROR=y 00:03:22.646 +++ CONFIG_HAVE_LIBBSD=n 00:03:22.646 +++ CONFIG_UBSAN=n 00:03:22.646 +++ CONFIG_IPSEC_MB_DIR= 00:03:22.646 +++ CONFIG_GOLANG=n 00:03:22.646 +++ CONFIG_ISAL=y 00:03:22.646 +++ CONFIG_IDXD_KERNEL=n 00:03:22.646 +++ CONFIG_DPDK_LIB_DIR= 00:03:22.646 +++ CONFIG_RDMA_PROV=verbs 00:03:22.646 +++ CONFIG_APPS=y 00:03:22.646 +++ CONFIG_SHARED=n 00:03:22.646 +++ CONFIG_HAVE_KEYUTILS=n 00:03:22.646 +++ CONFIG_FC_PATH= 00:03:22.646 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:22.646 +++ CONFIG_FC=n 00:03:22.646 +++ CONFIG_AVAHI=n 00:03:22.646 +++ CONFIG_FIO_PLUGIN=y 00:03:22.646 +++ CONFIG_RAID5F=n 00:03:22.646 +++ CONFIG_EXAMPLES=y 00:03:22.646 +++ CONFIG_TESTS=y 00:03:22.646 +++ CONFIG_CRYPTO_MLX5=n 00:03:22.646 +++ CONFIG_MAX_LCORES=128 00:03:22.646 +++ CONFIG_IPSEC_MB=n 00:03:22.646 +++ CONFIG_PGO_DIR= 00:03:22.646 +++ CONFIG_DEBUG=y 00:03:22.646 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:22.646 +++ CONFIG_CROSS_PREFIX= 00:03:22.646 
+++ CONFIG_URING=n 00:03:22.646 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:22.646 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:22.646 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:22.646 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:22.646 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:22.646 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:22.646 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:22.646 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:22.646 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:22.646 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:22.646 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:22.646 +++ VHOST_APP=("$_app_dir/vhost") 00:03:22.646 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:22.646 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:22.646 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:22.646 +++ [[ #ifndef SPDK_CONFIG_H 00:03:22.646 #define SPDK_CONFIG_H 00:03:22.646 #define SPDK_CONFIG_APPS 1 00:03:22.646 #define SPDK_CONFIG_ARCH native 00:03:22.646 #undef SPDK_CONFIG_ASAN 00:03:22.646 #undef SPDK_CONFIG_AVAHI 00:03:22.646 #undef SPDK_CONFIG_CET 00:03:22.646 #undef SPDK_CONFIG_COVERAGE 00:03:22.646 #define SPDK_CONFIG_CROSS_PREFIX 00:03:22.646 #undef SPDK_CONFIG_CRYPTO 00:03:22.646 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:22.646 #undef SPDK_CONFIG_CUSTOMOCF 00:03:22.646 #undef SPDK_CONFIG_DAOS 00:03:22.646 #define SPDK_CONFIG_DAOS_DIR 00:03:22.646 #define SPDK_CONFIG_DEBUG 1 00:03:22.646 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:22.646 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:22.646 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:22.646 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:22.646 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:22.646 #undef SPDK_CONFIG_DPDK_UADK 00:03:22.646 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:22.646 #define SPDK_CONFIG_EXAMPLES 1 00:03:22.646 #undef SPDK_CONFIG_FC 00:03:22.646 #define SPDK_CONFIG_FC_PATH 00:03:22.646 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:22.646 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:22.646 #undef SPDK_CONFIG_FUSE 00:03:22.646 #undef SPDK_CONFIG_FUZZER 00:03:22.646 #define SPDK_CONFIG_FUZZER_LIB 00:03:22.646 #undef SPDK_CONFIG_GOLANG 00:03:22.646 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:22.646 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:22.646 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:22.646 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:22.646 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:22.646 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:22.646 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:22.646 #define SPDK_CONFIG_IDXD 1 00:03:22.646 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:22.646 #undef SPDK_CONFIG_IPSEC_MB 00:03:22.646 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:22.646 #define SPDK_CONFIG_ISAL 1 00:03:22.646 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:22.646 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:22.646 #define SPDK_CONFIG_LIBDIR 00:03:22.646 #undef SPDK_CONFIG_LTO 00:03:22.646 #define SPDK_CONFIG_MAX_LCORES 128 00:03:22.646 #undef SPDK_CONFIG_NVME_CUSE 00:03:22.646 #undef SPDK_CONFIG_OCF 00:03:22.646 #define SPDK_CONFIG_OCF_PATH 00:03:22.646 #define SPDK_CONFIG_OPENSSL_PATH 00:03:22.646 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:22.646 #define SPDK_CONFIG_PGO_DIR 00:03:22.646 #undef SPDK_CONFIG_PGO_USE 00:03:22.646 #define SPDK_CONFIG_PREFIX /usr/local 00:03:22.646 #undef SPDK_CONFIG_RAID5F 00:03:22.646 #undef SPDK_CONFIG_RBD 
00:03:22.646 #define SPDK_CONFIG_RDMA 1 00:03:22.646 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:22.646 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:22.646 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:22.646 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:22.646 #undef SPDK_CONFIG_SHARED 00:03:22.646 #undef SPDK_CONFIG_SMA 00:03:22.646 #define SPDK_CONFIG_TESTS 1 00:03:22.646 #undef SPDK_CONFIG_TSAN 00:03:22.646 #undef SPDK_CONFIG_UBLK 00:03:22.646 #undef SPDK_CONFIG_UBSAN 00:03:22.646 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:22.646 #undef SPDK_CONFIG_URING 00:03:22.646 #define SPDK_CONFIG_URING_PATH 00:03:22.646 #undef SPDK_CONFIG_URING_ZNS 00:03:22.646 #undef SPDK_CONFIG_USDT 00:03:22.646 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:22.646 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:22.646 #undef SPDK_CONFIG_VFIO_USER 00:03:22.646 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:22.646 #undef SPDK_CONFIG_VHOST 00:03:22.646 #undef SPDK_CONFIG_VIRTIO 00:03:22.646 #undef SPDK_CONFIG_VTUNE 00:03:22.646 #define SPDK_CONFIG_VTUNE_DIR 00:03:22.646 #define SPDK_CONFIG_WERROR 1 00:03:22.646 #define SPDK_CONFIG_WPDK_DIR 00:03:22.646 #undef SPDK_CONFIG_XNVME 00:03:22.646 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:22.647 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:22.647 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.647 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:22.647 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.647 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.647 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:22.647 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:22.647 ++++ export PATH 00:03:22.647 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:22.647 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:22.647 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:22.647 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:22.647 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:22.647 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:22.647 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:22.647 +++ TEST_TAG=N/A 00:03:22.647 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:22.647 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:22.647 ++++ uname -s 00:03:22.647 +++ PM_OS=FreeBSD 00:03:22.647 +++ MONITOR_RESOURCES_SUDO=() 00:03:22.647 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:22.647 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:22.647 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:22.647 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:22.647 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:22.647 +++ SUDO[0]= 00:03:22.647 +++ SUDO[1]='sudo -E' 00:03:22.647 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:22.647 +++ [[ FreeBSD == FreeBSD ]] 00:03:22.647 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:22.647 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:22.647 ++ : 0 00:03:22.647 ++ export RUN_NIGHTLY 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_RUN_VALGRIND 00:03:22.647 ++ : 1 00:03:22.647 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:22.647 ++ : 1 00:03:22.647 ++ export SPDK_TEST_UNITTEST 00:03:22.647 ++ : 00:03:22.647 ++ export SPDK_TEST_AUTOBUILD 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_RELEASE_BUILD 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_ISAL 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_ISCSI 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:22.647 ++ : 1 00:03:22.647 ++ export SPDK_TEST_NVME 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVME_PMR 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVME_BP 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVME_CLI 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVME_CUSE 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVME_FDP 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVMF 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VFIOUSER 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_FUZZER 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_FUZZER_SHORT 00:03:22.647 ++ : rdma 00:03:22.647 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_RBD 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VHOST 00:03:22.647 ++ : 1 00:03:22.647 ++ export SPDK_TEST_BLOCKDEV 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_IOAT 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_BLOBFS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VHOST_INIT 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_LVOL 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_RUN_ASAN 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_RUN_UBSAN 00:03:22.647 ++ : 00:03:22.647 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_RUN_NON_ROOT 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_CRYPTO 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_FTL 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_OCF 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_VMD 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_OPAL 00:03:22.647 ++ : 00:03:22.647 ++ export SPDK_TEST_NATIVE_DPDK 00:03:22.647 ++ : true 00:03:22.647 ++ export SPDK_AUTOTEST_X 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_RAID5 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_URING 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_USDT 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_USE_IGB_UIO 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_SCHEDULER 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_SCANBUILD 00:03:22.647 ++ : 00:03:22.647 ++ export SPDK_TEST_NVMF_NICS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_SMA 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_DAOS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_XNVME 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_ACCEL_DSA 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_ACCEL_IAA 00:03:22.647 ++ : 00:03:22.647 ++ export SPDK_TEST_FUZZER_TARGET 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_TEST_NVMF_MDNS 00:03:22.647 ++ : 0 00:03:22.647 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:22.647 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:22.647 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:22.647 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:22.647 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:22.647 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:22.647 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:22.647 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:22.647 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:22.647 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:22.647 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:22.647 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:22.647 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:22.647 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:22.647 ++ PYTHONDONTWRITEBYTECODE=1 00:03:22.647 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:22.647 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:22.647 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:22.647 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:22.647 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:22.647 ++ rm -rf /var/tmp/asan_suppression_file 00:03:22.647 ++ cat 00:03:22.648 ++ echo leak:libfuse3.so 00:03:22.648 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:22.648 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:22.648 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:22.648 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:22.648 ++ '[' -z /var/spdk/dependencies ']' 00:03:22.648 ++ export DEPENDENCY_DIR 00:03:22.648 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:22.648 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:22.648 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:22.648 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:22.648 ++ export QEMU_BIN= 00:03:22.648 ++ QEMU_BIN= 00:03:22.648 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:22.648 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:22.648 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:22.648 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:22.648 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:22.648 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:22.648 ++ '[' 0 -eq 0 ']' 00:03:22.648 ++ export valgrind= 00:03:22.648 ++ valgrind= 00:03:22.648 +++ uname -s 00:03:22.648 ++ '[' FreeBSD = Linux ']' 
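[Annotation] Just before the FreeBSD/Linux branch, the harness exports its sanitizer knobs and builds the LeakSanitizer suppression file (the lone `cat` / `echo leak:libfuse3.so` pair in the trace above). A condensed sketch of that bootstrap follows, with the option strings copied verbatim from the trace; note that on this run the binaries are built without ASAN/UBSAN (CONFIG_ASAN=n, CONFIG_UBSAN=n in the config dump above), so the exports are inert here.

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
# A known leak in libfuse3 is suppressed so LeakSanitizer does not fail the run.
echo 'leak:libfuse3.so' >> "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file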
00:03:22.648 +++ uname -s 00:03:22.648 ++ '[' FreeBSD = FreeBSD ']' 00:03:22.648 ++ MAKE=gmake 00:03:22.648 +++ sysctl -a 00:03:22.648 +++ grep -E -i hw.ncpu 00:03:22.648 +++ awk '{print $2}' 00:03:22.648 ++ MAKEFLAGS=-j10 00:03:22.648 ++ HUGEMEM=2048 00:03:22.648 ++ export HUGEMEM=2048 00:03:22.648 ++ HUGEMEM=2048 00:03:22.648 ++ NO_HUGE=() 00:03:22.648 ++ TEST_MODE= 00:03:22.648 ++ [[ -z '' ]] 00:03:22.648 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:22.648 ++ exec 00:03:22.648 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:22.648 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:22.648 ++ set_test_storage 2147483648 00:03:22.648 ++ [[ -v testdir ]] 00:03:22.648 ++ local requested_size=2147483648 00:03:22.648 ++ local mount target_dir 00:03:22.648 ++ local -A mounts fss sizes avails uses 00:03:22.648 ++ local source fs size avail mount use 00:03:22.648 ++ local storage_fallback storage_candidates 00:03:22.648 +++ mktemp -udt spdk.XXXXXX 00:03:22.648 ++ storage_fallback=/tmp/spdk.XXXXXX.6ovVFGEXB7 00:03:22.648 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:22.648 ++ [[ -n '' ]] 00:03:22.648 ++ [[ -n '' ]] 00:03:22.648 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.6ovVFGEXB7/tests/unit /tmp/spdk.XXXXXX.6ovVFGEXB7 00:03:22.648 ++ requested_size=2214592512 00:03:22.648 ++ read -r source fs size use avail _ mount 00:03:22.648 +++ df -T 00:03:22.648 +++ grep -v Filesystem 00:03:22.648 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:22.648 ++ fss["$mount"]=ufs 00:03:22.648 ++ avails["$mount"]=17235144704 00:03:22.648 ++ sizes["$mount"]=31182712832 00:03:22.648 ++ uses["$mount"]=11452952576 00:03:22.648 ++ read -r source fs size use avail _ mount 00:03:22.648 ++ mounts["$mount"]=devfs 00:03:22.648 ++ fss["$mount"]=devfs 00:03:22.648 ++ avails["$mount"]=1024 00:03:22.648 ++ sizes["$mount"]=1024 00:03:22.648 ++ uses["$mount"]=0 00:03:22.648 ++ read -r source fs size use avail _ mount 00:03:22.648 ++ mounts["$mount"]=tmpfs 00:03:22.648 ++ fss["$mount"]=tmpfs 00:03:22.648 ++ avails["$mount"]=2147442688 00:03:22.648 ++ sizes["$mount"]=2147483648 00:03:22.648 ++ uses["$mount"]=40960 00:03:22.648 ++ read -r source fs size use avail _ mount 00:03:22.648 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output 00:03:22.648 ++ fss["$mount"]=fusefs.sshfs 00:03:22.648 ++ avails["$mount"]=92778512384 00:03:22.648 ++ sizes["$mount"]=105088212992 00:03:22.648 ++ uses["$mount"]=6924267520 00:03:22.648 ++ read -r source fs size use avail _ mount 00:03:22.648 ++ printf '* Looking for test storage...\n' 00:03:22.648 * Looking for test storage... 
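[Annotation] set_test_storage, whose trace begins here, asks for 2 GiB plus a 64 MiB margin of scratch space (requested_size=2214592512). It parses `df -T` into associative arrays keyed by mount point and then, in the lines that follow, walks the candidate directories until one sits on a filesystem with enough free space. A condensed sketch using the same variable names as the trace; testdir and storage_fallback are set earlier in the trace, and the candidate list is abbreviated here.

declare -A mounts fss sizes avails uses
requested_size=2214592512   # 2 GiB + 64 MiB margin, as computed in the trace
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

for target_dir in "$testdir" "$storage_fallback"; do
    # Same awk filter the trace uses to map a directory to its mount point.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]:-0}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done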
00:03:22.648 ++ local target_space new_size 00:03:22.648 ++ for target_dir in "${storage_candidates[@]}" 00:03:22.648 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:22.648 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:22.648 ++ mount=/ 00:03:22.648 ++ target_space=17235144704 00:03:22.648 ++ (( target_space == 0 || target_space < requested_size )) 00:03:22.648 ++ (( target_space >= requested_size )) 00:03:22.648 ++ [[ ufs == tmpfs ]] 00:03:22.648 ++ [[ ufs == ramfs ]] 00:03:22.648 ++ [[ / == / ]] 00:03:22.648 ++ new_size=13667545088 00:03:22.648 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:22.648 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:22.648 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:22.648 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:22.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:22.648 ++ return 0 00:03:22.648 ++ set -o errtrace 00:03:22.648 ++ shopt -s extdebug 00:03:22.648 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:22.648 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@1687 -- # true 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@29 -- # exec 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:22.648 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:22.648 18:17:14 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.648 18:17:14 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:22.648 ************************************ 00:03:22.648 START TEST unittest_pci_event 00:03:22.648 ************************************ 00:03:22.648 18:17:14 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:22.648 00:03:22.648 
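[Annotation] Each START TEST / END TEST banner pair in this log comes from the harness's run_test wrapper, which prints the test name, times the command (producing the real/user/sys triplet seen after each CUnit binary), and propagates the exit status. The sketch below is only the rough shape reconstructed from the banners in this log, not the actual autotest_common.sh source.

run_test() {
    local test_name=$1; shift
    echo '************************************'
    printf 'START TEST %s\n' "$test_name"
    echo '************************************'
    time "$@"                      # e.g. a CUnit binary such as pci_event_ut
    local rc=$?
    echo '************************************'
    printf 'END TEST %s\n' "$test_name"
    echo '************************************'
    return "$rc"
}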
00:03:22.648 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.648 http://cunit.sourceforge.net/ 00:03:22.648 00:03:22.648 00:03:22.648 Suite: pci_event 00:03:22.648 Test: test_pci_parse_event ...passed 00:03:22.648 00:03:22.649 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.649 suites 1 1 n/a 0 0 00:03:22.649 tests 1 1 1 0 0 00:03:22.649 asserts 1 1 1 0 n/a 00:03:22.649 00:03:22.649 Elapsed time = 0.000 seconds 00:03:22.649 00:03:22.649 real 0m0.019s 00:03:22.649 user 0m0.000s 00:03:22.649 sys 0m0.017s 00:03:22.649 18:17:14 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.649 18:17:14 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:22.649 ************************************ 00:03:22.649 END TEST unittest_pci_event 00:03:22.649 ************************************ 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:22.910 18:17:15 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:22.910 ************************************ 00:03:22.910 START TEST unittest_include 00:03:22.910 ************************************ 00:03:22.910 18:17:15 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:22.910 00:03:22.910 00:03:22.910 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.910 http://cunit.sourceforge.net/ 00:03:22.910 00:03:22.910 00:03:22.910 Suite: histogram 00:03:22.910 Test: histogram_test ...passed 00:03:22.910 Test: histogram_merge ...passed 00:03:22.910 00:03:22.910 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.910 suites 1 1 n/a 0 0 00:03:22.910 tests 2 2 2 0 0 00:03:22.910 asserts 50 50 50 0 n/a 00:03:22.910 00:03:22.910 Elapsed time = 0.000 seconds 00:03:22.910 00:03:22.910 real 0m0.006s 00:03:22.910 user 0m0.000s 00:03:22.910 sys 0m0.007s 00:03:22.910 18:17:15 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.910 18:17:15 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:22.910 ************************************ 00:03:22.910 END TEST unittest_include 00:03:22.910 ************************************ 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:22.910 18:17:15 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.910 18:17:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:22.910 ************************************ 00:03:22.910 START TEST unittest_bdev 00:03:22.910 ************************************ 00:03:22.910 18:17:15 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:03:22.910 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:22.910 00:03:22.910 00:03:22.910 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.910 http://cunit.sourceforge.net/ 
00:03:22.910 00:03:22.910 00:03:22.910 Suite: bdev 00:03:22.910 Test: bytes_to_blocks_test ...passed 00:03:22.910 Test: num_blocks_test ...passed 00:03:22.910 Test: io_valid_test ...passed 00:03:22.910 Test: open_write_test ...[2024-07-15 18:17:15.074323] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:22.910 [2024-07-15 18:17:15.074576] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:22.910 [2024-07-15 18:17:15.074602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:22.910 passed 00:03:22.910 Test: claim_test ...passed 00:03:22.910 Test: alias_add_del_test ...[2024-07-15 18:17:15.077635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:22.910 [2024-07-15 18:17:15.077678] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:22.910 [2024-07-15 18:17:15.077696] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:22.910 passed 00:03:22.910 Test: get_device_stat_test ...passed 00:03:22.910 Test: bdev_io_types_test ...passed 00:03:22.910 Test: bdev_io_wait_test ...passed 00:03:22.910 Test: bdev_io_spans_split_test ...passed 00:03:22.910 Test: bdev_io_boundary_split_test ...passed 00:03:22.910 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 18:17:15.084880] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:22.910 passed 00:03:22.910 Test: bdev_io_mix_split_test ...passed 00:03:22.910 Test: bdev_io_split_with_io_wait ...passed 00:03:22.910 Test: bdev_io_write_unit_split_test ...[2024-07-15 18:17:15.090080] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:22.910 [2024-07-15 18:17:15.090140] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:22.910 [2024-07-15 18:17:15.090162] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:22.910 [2024-07-15 18:17:15.090182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:22.910 passed 00:03:22.910 Test: bdev_io_alignment_with_boundary ...passed 00:03:22.910 Test: bdev_io_alignment ...passed 00:03:22.910 Test: bdev_histograms ...passed 00:03:22.910 Test: bdev_write_zeroes ...passed 00:03:22.910 Test: bdev_compare_and_write ...passed 00:03:22.910 Test: bdev_compare ...passed 00:03:22.910 Test: bdev_compare_emulated ...passed 00:03:22.910 Test: bdev_zcopy_write ...passed 00:03:22.910 Test: bdev_zcopy_read ...passed 00:03:22.910 Test: bdev_open_while_hotremove ...passed 00:03:22.910 Test: bdev_close_while_hotremove ...passed 00:03:22.910 Test: bdev_open_ext_test ...[2024-07-15 18:17:15.109595] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:22.910 passed 00:03:22.910 Test: bdev_open_ext_unregister ...passed 00:03:22.910 Test: bdev_set_io_timeout ...[2024-07-15 18:17:15.109691] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:22.910 passed 00:03:22.910 Test: bdev_set_qd_sampling ...passed 00:03:22.910 Test: lba_range_overlap ...passed 00:03:22.910 Test: lock_lba_range_check_ranges ...passed 00:03:22.910 Test: lock_lba_range_with_io_outstanding ...passed 00:03:22.910 Test: lock_lba_range_overlapped ...passed 00:03:22.910 Test: bdev_quiesce ...[2024-07-15 18:17:15.119123] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:22.910 passed 00:03:22.910 Test: bdev_io_abort ...passed 00:03:22.910 Test: bdev_unmap ...passed 00:03:22.910 Test: bdev_write_zeroes_split_test ...passed 00:03:22.910 Test: bdev_set_options_test ...[2024-07-15 18:17:15.124637] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:22.910 passed 00:03:22.910 Test: bdev_get_memory_domains ...passed 00:03:22.910 Test: bdev_io_ext ...passed 00:03:22.910 Test: bdev_io_ext_no_opts ...passed 00:03:22.910 Test: bdev_io_ext_invalid_opts ...passed 00:03:22.910 Test: bdev_io_ext_split ...passed 00:03:22.910 Test: bdev_io_ext_bounce_buffer ...passed 00:03:22.910 Test: bdev_register_uuid_alias ...[2024-07-15 18:17:15.134137] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 768fbec4-42d6-11ef-9ade-d5fc5159efa5 already exists 00:03:22.910 [2024-07-15 18:17:15.134208] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:768fbec4-42d6-11ef-9ade-d5fc5159efa5 alias for bdev bdev0 00:03:22.910 passed 00:03:22.910 Test: bdev_unregister_by_name ...passed[2024-07-15 18:17:15.134689] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:22.911 [2024-07-15 18:17:15.134714] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7983:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:22.911 00:03:22.911 Test: for_each_bdev_test ...passed 00:03:22.911 Test: bdev_seek_test ...passed 00:03:22.911 Test: bdev_copy ...passed 00:03:22.911 Test: bdev_copy_split_test ...passed 00:03:22.911 Test: examine_locks ...passed 00:03:22.911 Test: claim_v2_rwo ...[2024-07-15 18:17:15.139877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.139928] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.139947] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.139962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.139976] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 passed 00:03:22.911 Test: claim_v2_rom ...[2024-07-15 18:17:15.139993] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8704:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:22.911 [2024-07-15 18:17:15.140056] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140289] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140305] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140327] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:22.911 [2024-07-15 18:17:15.140346] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:22.911 passed 00:03:22.911 Test: claim_v2_rwm ...[2024-07-15 18:17:15.140404] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:22.911 [2024-07-15 18:17:15.140432] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140478] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140493] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140511] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:22.911 passed 00:03:22.911 Test: claim_v2_existing_writer ...[2024-07-15 18:17:15.140552] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:22.911 [2024-07-15 18:17:15.140570] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:22.911 passed 00:03:22.911 Test: claim_v2_existing_v1 ...[2024-07-15 18:17:15.140606] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140622] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:22.911 passed 00:03:22.911 Test: claim_v1_existing_v2 ...[2024-07-15 18:17:15.140670] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140688] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:22.911 [2024-07-15 18:17:15.140704] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:22.911 passed 00:03:22.911 Test: examine_claimed ...[2024-07-15 18:17:15.140770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:22.911 passed 00:03:22.911 00:03:22.911 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.911 suites 1 1 n/a 0 0 00:03:22.911 tests 59 59 59 0 0 00:03:22.911 asserts 4599 4599 4599 0 n/a 00:03:22.911 00:03:22.911 Elapsed time = 0.070 seconds 00:03:22.911 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:22.911 00:03:22.911 00:03:22.911 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.911 http://cunit.sourceforge.net/ 00:03:22.911 00:03:22.911 00:03:22.911 Suite: nvme 00:03:22.911 Test: test_create_ctrlr ...passed 00:03:22.911 Test: test_reset_ctrlr ...[2024-07-15 18:17:15.152844] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:22.911 passed 00:03:22.911 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:22.911 Test: test_failover_ctrlr ...passed 00:03:22.911 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:03:22.911 Test: test_pending_reset ...[2024-07-15 18:17:15.153417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.153465] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.153498] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.153713] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.153761] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 passed 00:03:22.911 Test: test_attach_ctrlr ...[2024-07-15 18:17:15.153871] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:22.911 passed 00:03:22.911 Test: test_aer_cb ...passed 00:03:22.911 Test: test_submit_nvme_cmd ...passed 00:03:22.911 Test: test_add_remove_trid ...passed 00:03:22.911 Test: test_abort ...[2024-07-15 18:17:15.154250] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:22.911 passed 00:03:22.911 Test: test_get_io_qpair ...passed 00:03:22.911 Test: test_bdev_unregister ...passed 00:03:22.911 Test: test_compare_ns ...passed 00:03:22.911 Test: test_init_ana_log_page ...passed 00:03:22.911 Test: test_get_memory_domains ...passed 00:03:22.911 Test: test_reconnect_qpair ...passed 00:03:22.911 Test: test_create_bdev_ctrlr ...[2024-07-15 18:17:15.154611] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.154717] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:22.911 passed 00:03:22.911 Test: test_add_multi_ns_to_bdev ...[2024-07-15 18:17:15.154894] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:22.911 passed 00:03:22.911 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:22.911 Test: test_admin_path ...passed 00:03:22.911 Test: test_reset_bdev_ctrlr ...passed 00:03:22.911 Test: test_find_io_path ...passed 00:03:22.911 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:22.911 Test: test_retry_io_for_io_path_error ...passed 00:03:22.911 Test: test_retry_io_count ...passed 00:03:22.911 Test: test_concurrent_read_ana_log_page ...passed 00:03:22.911 Test: test_retry_io_for_ana_error ...passed 00:03:22.911 Test: test_check_io_error_resiliency_params ...[2024-07-15 18:17:15.155732] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:22.911 [2024-07-15 18:17:15.155760] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:22.911 [2024-07-15 18:17:15.155776] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:22.911 [2024-07-15 18:17:15.155790] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:22.911 [2024-07-15 18:17:15.155805] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:22.911 [2024-07-15 18:17:15.155827] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:22.911 [2024-07-15 18:17:15.155843] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:22.911 [2024-07-15 18:17:15.155858] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:22.911 [2024-07-15 18:17:15.155872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:22.911 passed 00:03:22.911 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:22.911 Test: test_reconnect_ctrlr ...[2024-07-15 18:17:15.156010] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.156064] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.156085] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.156097] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.156108] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 passed 00:03:22.911 Test: test_retry_failover_ctrlr ...[2024-07-15 18:17:15.156141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 passed 00:03:22.911 Test: test_fail_path ...[2024-07-15 18:17:15.156193] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.911 [2024-07-15 18:17:15.156212] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:22.912 [2024-07-15 18:17:15.156224] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.912 [2024-07-15 18:17:15.156235] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.912 [2024-07-15 18:17:15.156245] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.912 passed 00:03:22.912 Test: test_nvme_ns_cmp ...passed 00:03:22.912 Test: test_ana_transition ...passed 00:03:22.912 Test: test_set_preferred_path ...passed 00:03:22.912 Test: test_find_next_io_path ...passed 00:03:22.912 Test: test_find_io_path_min_qd ...passed 00:03:22.912 Test: test_disable_auto_failback ...[2024-07-15 18:17:15.156371] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.912 passed 00:03:22.912 Test: test_set_multipath_policy ...passed 00:03:22.912 Test: test_uuid_generation ...passed 00:03:22.912 Test: test_retry_io_to_same_path ...passed 00:03:22.912 Test: test_race_between_reset_and_disconnected ...passed 00:03:22.912 Test: test_ctrlr_op_rpc ...passed 00:03:22.912 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:22.912 Test: test_disable_enable_ctrlr ...passed 00:03:22.912 Test: test_delete_ctrlr_done ...passed 00:03:22.912 Test: test_ns_remove_during_reset ...[2024-07-15 18:17:15.198142] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:22.912 [2024-07-15 18:17:15.198247] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:22.912 passed 00:03:22.912 Test: test_io_path_is_current ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 1 1 n/a 0 0 00:03:22.912 tests 49 49 49 0 0 00:03:22.912 asserts 3577 3577 3577 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.016 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 Test Options 00:03:22.912 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:22.912 00:03:22.912 Suite: raid 00:03:22.912 Test: test_create_raid ...passed 00:03:22.912 Test: test_create_raid_superblock ...passed 00:03:22.912 Test: test_delete_raid ...passed 00:03:22.912 Test: test_create_raid_invalid_args ...[2024-07-15 18:17:15.208964] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:22.912 [2024-07-15 18:17:15.209237] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:22.912 [2024-07-15 18:17:15.209344] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:22.912 [2024-07-15 18:17:15.209388] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:22.912 [2024-07-15 18:17:15.209405] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:22.912 [2024-07-15 18:17:15.209557] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:22.912 [2024-07-15 18:17:15.209585] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:22.912 passed 00:03:22.912 Test: test_delete_raid_invalid_args ...passed 00:03:22.912 Test: test_io_channel ...passed 00:03:22.912 Test: test_reset_io ...passed 00:03:22.912 Test: test_multi_raid ...passed 00:03:22.912 Test: test_io_type_supported ...passed 00:03:22.912 Test: test_raid_json_dump_info ...passed 00:03:22.912 Test: test_context_size ...passed 00:03:22.912 Test: test_raid_level_conversions ...passed 00:03:22.912 Test: test_raid_io_split ...passed 00:03:22.912 Test: test_raid_process ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 1 1 n/a 0 0 00:03:22.912 tests 14 14 14 0 0 00:03:22.912 asserts 6183 6183 6183 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: raid_sb 00:03:22.912 Test: test_raid_bdev_write_superblock ...passed 00:03:22.912 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:22.912 Test: test_raid_bdev_parse_superblock ...[2024-07-15 18:17:15.217491] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:22.912 passed 00:03:22.912 Suite: raid_sb_md 00:03:22.912 Test: test_raid_bdev_write_superblock ...passed 00:03:22.912 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:22.912 Test: test_raid_bdev_parse_superblock ...passed 00:03:22.912 Suite: raid_sb_md_interleaved 00:03:22.912 Test: test_raid_bdev_write_superblock ...passed 00:03:22.912 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:22.912 Test: test_raid_bdev_parse_superblock ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 3 3 n/a 0 0 00:03:22.912 tests 9 9 9 0 0 00:03:22.912 asserts 139 139 139 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 [2024-07-15 18:17:15.217780] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:22.912 [2024-07-15 18:17:15.217894] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: concat 00:03:22.912 Test: test_concat_start ...passed 00:03:22.912 Test: test_concat_rw ...passed 00:03:22.912 Test: test_concat_null_payload ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 1 1 n/a 0 0 00:03:22.912 tests 3 3 3 0 0 00:03:22.912 asserts 8460 8460 8460 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: raid0 00:03:22.912 Test: test_write_io ...passed 00:03:22.912 Test: test_read_io ...passed 00:03:22.912 Test: test_unmap_io ...passed 00:03:22.912 Test: test_io_failure ...passed 00:03:22.912 Suite: raid0_dif 00:03:22.912 Test: test_write_io ...passed 00:03:22.912 Test: test_read_io ...passed 00:03:22.912 Test: test_unmap_io ...passed 00:03:22.912 Test: test_io_failure ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 2 2 n/a 0 0 00:03:22.912 tests 8 8 8 0 0 00:03:22.912 asserts 368291 368291 368291 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: raid1 00:03:22.912 Test: test_raid1_start ...passed 00:03:22.912 Test: test_raid1_read_balancing ...passed 00:03:22.912 Test: test_raid1_write_error ...passed 00:03:22.912 Test: test_raid1_read_error ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed 
Failed Inactive 00:03:22.912 suites 1 1 n/a 0 0 00:03:22.912 tests 4 4 4 0 0 00:03:22.912 asserts 4374 4374 4374 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: zone 00:03:22.912 Test: test_zone_get_operation ...passed 00:03:22.912 Test: test_bdev_zone_get_info ...passed 00:03:22.912 Test: test_bdev_zone_management ...passed 00:03:22.912 Test: test_bdev_zone_append ...passed 00:03:22.912 Test: test_bdev_zone_append_with_md ...passed 00:03:22.912 Test: test_bdev_zone_appendv ...passed 00:03:22.912 Test: test_bdev_zone_appendv_with_md ...passed 00:03:22.912 Test: test_bdev_io_get_append_location ...passed 00:03:22.912 00:03:22.912 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.912 suites 1 1 n/a 0 0 00:03:22.912 tests 8 8 8 0 0 00:03:22.912 asserts 94 94 94 0 n/a 00:03:22.912 00:03:22.912 Elapsed time = 0.000 seconds 00:03:22.912 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:22.912 00:03:22.912 00:03:22.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.912 http://cunit.sourceforge.net/ 00:03:22.912 00:03:22.912 00:03:22.912 Suite: gpt_parse 00:03:22.912 Test: test_parse_mbr_and_primary ...[2024-07-15 18:17:15.255519] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:22.912 [2024-07-15 18:17:15.255822] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:22.912 [2024-07-15 18:17:15.255903] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:22.912 [2024-07-15 18:17:15.255938] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:22.913 [2024-07-15 18:17:15.255964] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:22.913 [2024-07-15 18:17:15.255984] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:22.913 passed 00:03:22.913 Test: test_parse_secondary ...[2024-07-15 18:17:15.256187] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:22.913 [2024-07-15 18:17:15.256201] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:22.913 [2024-07-15 18:17:15.256225] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:22.913 [2024-07-15 18:17:15.256236] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:22.913 passed 00:03:22.913 Test: test_check_mbr ...[2024-07-15 18:17:15.256374] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:22.913 [2024-07-15 18:17:15.256386] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:22.913 passed 00:03:22.913 Test: test_read_header ...[2024-07-15 18:17:15.256405] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:22.913 [2024-07-15 18:17:15.256418] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:22.913 [2024-07-15 18:17:15.256430] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:22.913 [2024-07-15 18:17:15.256443] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:22.913 [2024-07-15 18:17:15.256455] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:22.913 [2024-07-15 18:17:15.256467] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:22.913 passed 00:03:22.913 Test: test_read_partitions ...[2024-07-15 18:17:15.256483] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:22.913 [2024-07-15 18:17:15.256496] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:22.913 [2024-07-15 18:17:15.256507] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:22.913 [2024-07-15 18:17:15.256518] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:22.913 [2024-07-15 18:17:15.256587] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:22.913 passed 00:03:22.913 00:03:22.913 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.913 suites 1 1 n/a 0 0 00:03:22.913 tests 5 5 5 0 0 00:03:22.913 asserts 33 33 33 0 n/a 00:03:22.913 00:03:22.913 Elapsed time = 0.000 seconds 00:03:22.913 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:22.913 00:03:22.913 00:03:22.913 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.913 http://cunit.sourceforge.net/ 00:03:22.913 00:03:22.913 00:03:22.913 Suite: bdev_part 00:03:22.913 Test: part_test ...[2024-07-15 18:17:15.267103] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 19ba18ed-5c70-f456-819e-5507a769bf2f already exists 00:03:22.913 passed 00:03:22.913 Test: part_free_test ...[2024-07-15 18:17:15.267369] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:19ba18ed-5c70-f456-819e-5507a769bf2f alias for bdev test1 00:03:22.913 passed 00:03:23.174 Test: part_get_io_channel_test ...passed 00:03:23.174 Test: part_construct_ext ...passed 00:03:23.174 00:03:23.174 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.174 suites 1 1 n/a 0 0 00:03:23.174 tests 4 4 4 0 0 00:03:23.174 asserts 48 48 48 0 n/a 00:03:23.174 00:03:23.174 Elapsed time = 0.008 seconds 00:03:23.174 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:23.174 00:03:23.174 00:03:23.174 CUnit - A unit testing 
framework for C - Version 2.1-3 00:03:23.174 http://cunit.sourceforge.net/ 00:03:23.174 00:03:23.174 00:03:23.174 Suite: scsi_nvme_suite 00:03:23.174 Test: scsi_nvme_translate_test ...passed 00:03:23.174 00:03:23.174 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.174 suites 1 1 n/a 0 0 00:03:23.174 tests 1 1 1 0 0 00:03:23.174 asserts 104 104 104 0 n/a 00:03:23.174 00:03:23.174 Elapsed time = 0.000 seconds 00:03:23.174 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:23.174 00:03:23.174 00:03:23.174 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.174 http://cunit.sourceforge.net/ 00:03:23.174 00:03:23.174 00:03:23.174 Suite: lvol 00:03:23.174 Test: ut_lvs_init ...passed 00:03:23.174 Test: ut_lvol_init ...[2024-07-15 18:17:15.281008] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:23.174 [2024-07-15 18:17:15.281190] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:23.174 passed 00:03:23.174 Test: ut_lvol_snapshot ...passed 00:03:23.174 Test: ut_lvol_clone ...passed 00:03:23.174 Test: ut_lvs_destroy ...passed 00:03:23.174 Test: ut_lvs_unload ...passed 00:03:23.174 Test: ut_lvol_resize ...passed 00:03:23.174 Test: ut_lvol_set_read_only ...passed 00:03:23.174 Test: ut_lvol_hotremove ...passed 00:03:23.174 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:23.174 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:23.174 Test: ut_lvol_read_write ...passed[2024-07-15 18:17:15.281353] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:23.174 00:03:23.174 Test: ut_vbdev_lvol_submit_request ...passed 00:03:23.174 Test: ut_lvol_examine_config ...passed 00:03:23.174 Test: ut_lvol_examine_disk ...passed 00:03:23.174 Test: ut_lvol_rename ...passed 00:03:23.174 Test: ut_bdev_finish ...passed 00:03:23.174 Test: ut_lvs_rename ...passed 00:03:23.174 Test: ut_lvol_seek ...passed 00:03:23.174 Test: ut_esnap_dev_create ...[2024-07-15 18:17:15.281460] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:23.174 [2024-07-15 18:17:15.281503] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:23.174 [2024-07-15 18:17:15.281515] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:23.174 passed 00:03:23.174 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:23.174 Test: ut_lvol_shallow_copy ...passed 00:03:23.174 Test: ut_lvol_set_external_parent ...[2024-07-15 18:17:15.281583] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:23.174 [2024-07-15 18:17:15.281597] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:23.174 [2024-07-15 18:17:15.281614] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:23.174 [2024-07-15 18:17:15.281653] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: 
lvol store not specified 00:03:23.174 [2024-07-15 18:17:15.281666] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:23.174 [2024-07-15 18:17:15.281690] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:23.174 [2024-07-15 18:17:15.281700] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:23.174 [2024-07-15 18:17:15.281717] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:23.174 passed 00:03:23.174 00:03:23.174 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.174 suites 1 1 n/a 0 0 00:03:23.174 tests 23 23 23 0 0 00:03:23.174 asserts 770 770 770 0 n/a 00:03:23.174 00:03:23.174 Elapsed time = 0.000 seconds 00:03:23.174 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:23.174 00:03:23.174 00:03:23.174 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.174 http://cunit.sourceforge.net/ 00:03:23.174 00:03:23.174 00:03:23.174 Suite: zone_block 00:03:23.174 Test: test_zone_block_create ...passed 00:03:23.174 Test: test_zone_block_create_invalid ...[2024-07-15 18:17:15.294163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:23.174 [2024-07-15 18:17:15.294439] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 18:17:15.294469] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:23.174 [2024-07-15 18:17:15.294485] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 18:17:15.294510] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:23.174 [2024-07-15 18:17:15.294527] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:03:23.174 Test: test_get_zone_info ...[2024-07-15 18:17:15.294541] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:23.174 [2024-07-15 18:17:15.294553] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 18:17:15.294641] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_supported_io_types ...passed 00:03:23.174 Test: test_reset_zone ...[2024-07-15 18:17:15.294668] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
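The "Failed to create block zoned vbdev: File exists" and "Invalid argument" lines above come from test_zone_block_create_invalid, which feeds the module bad parameters on purpose, so those *ERROR* lines are expected output rather than failures. A minimal sketch of the class of validation being exercised, with hypothetical names (this is not SPDK's actual vbdev_zone_block code):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

struct zone_block_opts {
	uint64_t zone_capacity;      /* usable blocks per zone; 0 is rejected */
	uint64_t optimal_open_zones; /* 0 is rejected */
	bool base_bdev_claimed;      /* a base bdev can back only one zoned vbdev */
};

static int
zone_block_validate_opts(const struct zone_block_opts *opts)
{
	if (opts->base_bdev_claimed) {
		return -EEXIST; /* surfaces as "File exists" in the log above */
	}
	if (opts->zone_capacity == 0 || opts->optimal_open_zones == 0) {
		return -EINVAL; /* "Zone capacity can't be 0" / "Optimal open zones can't be 0" */
	}
	return 0;
}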
00:03:23.174 [2024-07-15 18:17:15.294684] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_open_zone ...[2024-07-15 18:17:15.294760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.294779] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.294834] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.295062] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.295083] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_zone_write ...[2024-07-15 18:17:15.295136] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:23.174 [2024-07-15 18:17:15.295151] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.295168] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:23.174 [2024-07-15 18:17:15.295180] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.295733] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:23.174 [2024-07-15 18:17:15.295770] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.295788] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:23.174 [2024-07-15 18:17:15.295802] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.296451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:23.174 [2024-07-15 18:17:15.296485] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_zone_read ...[2024-07-15 18:17:15.296542] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:23.174 [2024-07-15 18:17:15.296560] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
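The test_zone_write errors above encode the sequential-write rule for zoned storage: a write must target a zone that exists ("Trying to write to invalid zone (lba 0x5000)"), must start exactly at that zone's current write pointer ("invalid address (lba 0x407, wp 0x405)"), and must stay within the zone's capacity ("Write exceeds zone capacity"). A hedged sketch of that check, with illustrative field names rather than the module's real internals:

#include <errno.h>
#include <stdint.h>

struct zone_state {
	uint64_t start_lba;  /* first LBA of the zone */
	uint64_t capacity;   /* writable blocks in the zone */
	uint64_t write_ptr;  /* next LBA that must be written */
};

static int
zone_check_write(const struct zone_state *z, uint64_t lba, uint64_t len)
{
	if (lba < z->start_lba || lba >= z->start_lba + z->capacity) {
		return -EINVAL; /* write targets a zone that does not exist */
	}
	if (lba != z->write_ptr) {
		return -EINVAL; /* sequential zones accept writes only at the write pointer */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return -EINVAL; /* e.g. lba 0x3f0 + len 0x20 runs past a 0x400-block zone */
	}
	return 0;
}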
00:03:23.174 [2024-07-15 18:17:15.296577] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:23.174 [2024-07-15 18:17:15.296589] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_close_zone ...[2024-07-15 18:17:15.296648] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:23.174 [2024-07-15 18:17:15.296663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.296707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 [2024-07-15 18:17:15.296729] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.174 passed 00:03:23.174 Test: test_finish_zone ...[2024-07-15 18:17:15.296774] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 [2024-07-15 18:17:15.296791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 passed 00:03:23.175 Test: test_append_zone ...[2024-07-15 18:17:15.296863] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 [2024-07-15 18:17:15.296890] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 [2024-07-15 18:17:15.296934] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:23.175 [2024-07-15 18:17:15.296950] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 [2024-07-15 18:17:15.296966] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:23.175 [2024-07-15 18:17:15.296979] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:23.175 [2024-07-15 18:17:15.298149] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:23.175 [2024-07-15 18:17:15.298185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
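test_append_zone, whose injected errors appear just above, covers the other half of the zoned-write contract: with zone append the caller does not pick an LBA at all; the device writes at the current write pointer and reports back where the data landed (which is what test_bdev_io_get_append_location read out in the earlier bdev_zone suite). A sketch under the same illustrative-names caveat:

#include <errno.h>
#include <stdint.h>

struct zb_zone {
	uint64_t start_lba;
	uint64_t capacity;
	uint64_t write_ptr;
};

static int
zb_zone_append(struct zb_zone *z, uint64_t len, uint64_t *out_lba)
{
	if (z->write_ptr + len > z->start_lba + z->capacity) {
		return -ENOSPC; /* appends past capacity are rejected, like writes */
	}
	*out_lba = z->write_ptr; /* the device, not the caller, chooses the LBA */
	z->write_ptr += len;     /* pointer advances past the appended blocks */
	return 0;
}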
00:03:23.175 passed 00:03:23.175 00:03:23.175 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.175 suites 1 1 n/a 0 0 00:03:23.175 tests 11 11 11 0 0 00:03:23.175 asserts 3437 3437 3437 0 n/a 00:03:23.175 00:03:23.175 Elapsed time = 0.000 seconds 00:03:23.175 18:17:15 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:23.175 00:03:23.175 00:03:23.175 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.175 http://cunit.sourceforge.net/ 00:03:23.175 00:03:23.175 00:03:23.175 Suite: bdev 00:03:23.175 Test: basic ...[2024-07-15 18:17:15.307188] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:23.175 [2024-07-15 18:17:15.307431] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x12035926a480 (0x24b260): Operation not permitted (rc=-1) 00:03:23.175 [2024-07-15 18:17:15.307450] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:23.175 passed 00:03:23.175 Test: unregister_and_close ...passed 00:03:23.175 Test: unregister_and_close_different_threads ...passed 00:03:23.175 Test: basic_qos ...passed 00:03:23.175 Test: put_channel_during_reset ...passed 00:03:23.175 Test: aborted_reset ...passed 00:03:23.175 Test: aborted_reset_no_outstanding_io ...passed 00:03:23.175 Test: io_during_reset ...passed 00:03:23.175 Test: reset_completions ...passed 00:03:23.175 Test: io_during_qos_queue ...passed 00:03:23.175 Test: io_during_qos_reset ...passed 00:03:23.175 Test: enomem ...passed 00:03:23.175 Test: enomem_multi_bdev ...passed 00:03:23.175 Test: enomem_multi_bdev_unregister ...passed 00:03:23.175 Test: enomem_multi_io_target ...passed 00:03:23.175 Test: qos_dynamic_enable ...passed 00:03:23.175 Test: bdev_histograms_mt ...passed 00:03:23.175 Test: bdev_set_io_timeout_mt ...passed 00:03:23.175 Test: lock_lba_range_then_submit_io ...[2024-07-15 18:17:15.336383] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x12035926a600 not unregistered 00:03:23.175 [2024-07-15 18:17:15.337201] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b248 already registered (old:0x12035926a600 new:0x12035926a780) 00:03:23.175 passed 00:03:23.175 Test: unregister_during_reset ...passed 00:03:23.175 Test: event_notify_and_close ...passed 00:03:23.175 Test: unregister_and_qos_poller ...passed 00:03:23.175 Suite: bdev_wrong_thread 00:03:23.175 Test: spdk_bdev_register_wt ...passed 00:03:23.175 Test: spdk_bdev_examine_wt ...passed[2024-07-15 18:17:15.341871] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8503:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x120359233380 (0x120359233380) 00:03:23.175 [2024-07-15 18:17:15.341912] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x120359233380 (0x120359233380) 00:03:23.175 00:03:23.175 00:03:23.175 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.175 suites 2 2 n/a 0 0 00:03:23.175 tests 24 24 24 0 0 00:03:23.175 asserts 621 621 621 0 n/a 00:03:23.175 00:03:23.175 Elapsed time = 0.039 seconds 00:03:23.175 00:03:23.175 real 0m0.278s 00:03:23.175 user 0m0.178s 00:03:23.175 sys 0m0.080s 00:03:23.175 18:17:15 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.175 18:17:15 unittest.unittest_bdev -- 
common/autotest_common.sh@10 -- # set +x 00:03:23.175 ************************************ 00:03:23.175 END TEST unittest_bdev 00:03:23.175 ************************************ 00:03:23.175 18:17:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:23.175 18:17:15 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:23.175 18:17:15 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:23.175 18:17:15 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:23.175 18:17:15 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:23.175 18:17:15 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:23.175 18:17:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.175 18:17:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.175 18:17:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:23.175 ************************************ 00:03:23.175 START TEST unittest_blob_blobfs 00:03:23.175 ************************************ 00:03:23.175 18:17:15 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:03:23.175 18:17:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:23.175 18:17:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:23.175 00:03:23.175 00:03:23.175 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.175 http://cunit.sourceforge.net/ 00:03:23.175 00:03:23.175 00:03:23.175 Suite: blob_nocopy_noextent 00:03:23.175 Test: blob_init ...[2024-07-15 18:17:15.401641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:23.175 passed 00:03:23.175 Test: blob_thin_provision ...passed 00:03:23.175 Test: blob_read_only ...passed 00:03:23.175 Test: bs_load ...[2024-07-15 18:17:15.480563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:23.175 passed 00:03:23.175 Test: bs_load_custom_cluster_size ...passed 00:03:23.175 Test: bs_load_after_failed_grow ...passed 00:03:23.175 Test: bs_cluster_sz ...[2024-07-15 18:17:15.507341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:23.175 [2024-07-15 18:17:15.507423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
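The blob_init and bs_cluster_sz errors above and below this point are negative tests of blobstore geometry: the 500-byte dev block length is rejected, presumably because a fixed 4096-byte metadata page must span a whole number of device blocks, and "Cluster size 4095 is smaller than page size 4096" fails the minimum-cluster rule. A hedged sketch of those two checks (the constant and names are illustrative, not lifted from blobstore.c):

#include <errno.h>
#include <stdint.h>

#define BS_PAGE_SIZE 4096u /* assumption: one blobstore metadata page is 4 KiB */

static int
bs_check_geometry(uint32_t dev_blocklen, uint32_t cluster_sz)
{
	if (dev_blocklen == 0 || BS_PAGE_SIZE % dev_blocklen != 0) {
		return -EINVAL; /* a 500-byte block cannot tile a 4096-byte page */
	}
	if (cluster_sz < BS_PAGE_SIZE) {
		return -EINVAL; /* a cluster must hold at least one full page */
	}
	return 0;
}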
00:03:23.175 [2024-07-15 18:17:15.507440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:23.175 passed 00:03:23.434 Test: bs_resize_md ...passed 00:03:23.434 Test: bs_destroy ...passed 00:03:23.434 Test: bs_type ...passed 00:03:23.434 Test: bs_super_block ...passed 00:03:23.434 Test: bs_test_recover_cluster_count ...passed 00:03:23.434 Test: bs_grow_live ...passed 00:03:23.434 Test: bs_grow_live_no_space ...passed 00:03:23.434 Test: bs_test_grow ...passed 00:03:23.434 Test: blob_serialize_test ...passed 00:03:23.434 Test: super_block_crc ...passed 00:03:23.434 Test: blob_thin_prov_write_count_io ...passed 00:03:23.434 Test: blob_thin_prov_unmap_cluster ...passed 00:03:23.434 Test: bs_load_iter_test ...passed 00:03:23.434 Test: blob_relations ...[2024-07-15 18:17:15.694917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.694979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 [2024-07-15 18:17:15.695106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.695119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 passed 00:03:23.434 Test: blob_relations2 ...[2024-07-15 18:17:15.708759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.708818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 [2024-07-15 18:17:15.708832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.708840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 [2024-07-15 18:17:15.708987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.708999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 [2024-07-15 18:17:15.709037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.434 [2024-07-15 18:17:15.709046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.434 passed 00:03:23.434 Test: blob_relations3 ...passed 00:03:23.693 Test: blobstore_clean_power_failure ...passed 00:03:23.693 Test: blob_delete_snapshot_power_failure ...[2024-07-15 18:17:15.889579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.693 [2024-07-15 18:17:15.902585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.693 [2024-07-15 18:17:15.902641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.693 [2024-07-15 18:17:15.902650] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.693 [2024-07-15 18:17:15.915609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.693 [2024-07-15 18:17:15.915650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:23.693 [2024-07-15 18:17:15.915659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.693 [2024-07-15 18:17:15.915667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.693 [2024-07-15 18:17:15.928625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:23.693 [2024-07-15 18:17:15.928675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.693 [2024-07-15 18:17:15.941590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:23.693 [2024-07-15 18:17:15.941645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.693 [2024-07-15 18:17:15.954601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:23.693 [2024-07-15 18:17:15.954661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.693 passed 00:03:23.693 Test: blob_create_snapshot_power_failure ...[2024-07-15 18:17:15.993431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.693 [2024-07-15 18:17:16.019103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.693 [2024-07-15 18:17:16.031986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:23.952 passed 00:03:23.952 Test: blob_io_unit ...passed 00:03:23.952 Test: blob_io_unit_compatibility ...passed 00:03:23.952 Test: blob_ext_md_pages ...passed 00:03:23.952 Test: blob_esnap_io_4096_4096 ...passed 00:03:23.952 Test: blob_esnap_io_512_512 ...passed 00:03:23.952 Test: blob_esnap_io_4096_512 ...passed 00:03:23.952 Test: blob_esnap_io_512_4096 ...passed 00:03:23.952 Test: blob_esnap_clone_resize ...passed 00:03:23.952 Suite: blob_bs_nocopy_noextent 00:03:23.952 Test: blob_open ...passed 00:03:23.952 Test: blob_create ...[2024-07-15 18:17:16.306729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:24.211 passed 00:03:24.211 Test: blob_create_loop ...passed 00:03:24.211 Test: blob_create_fail ...[2024-07-15 18:17:16.399962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:24.211 passed 00:03:24.211 Test: blob_create_internal ...passed 00:03:24.211 Test: blob_create_zero_extent ...passed 00:03:24.211 Test: blob_snapshot ...passed 00:03:24.211 Test: blob_clone ...passed 00:03:24.469 Test: blob_inflate 
...[2024-07-15 18:17:16.599194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:24.469 passed 00:03:24.469 Test: blob_delete ...passed 00:03:24.469 Test: blob_resize_test ...[2024-07-15 18:17:16.673306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:24.469 passed 00:03:24.469 Test: blob_resize_thin_test ...passed 00:03:24.469 Test: channel_ops ...passed 00:03:24.469 Test: blob_super ...passed 00:03:24.728 Test: blob_rw_verify_iov ...passed 00:03:24.728 Test: blob_unmap ...passed 00:03:24.728 Test: blob_iter ...passed 00:03:24.728 Test: blob_parse_md ...passed 00:03:24.728 Test: bs_load_pending_removal ...passed 00:03:24.728 Test: bs_unload ...[2024-07-15 18:17:17.021038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:24.728 passed 00:03:24.728 Test: bs_usable_clusters ...passed 00:03:24.986 Test: blob_crc ...[2024-07-15 18:17:17.095012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:24.986 [2024-07-15 18:17:17.095066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:24.986 passed 00:03:24.986 Test: blob_flags ...passed 00:03:24.986 Test: bs_version ...passed 00:03:24.986 Test: blob_set_xattrs_test ...[2024-07-15 18:17:17.207372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:24.986 [2024-07-15 18:17:17.207421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:24.986 passed 00:03:24.986 Test: blob_thin_prov_alloc ...passed 00:03:24.986 Test: blob_insert_cluster_msg_test ...passed 00:03:24.986 Test: blob_thin_prov_rw ...passed 00:03:25.244 Test: blob_thin_prov_rle ...passed 00:03:25.244 Test: blob_thin_prov_rw_iov ...passed 00:03:25.244 Test: blob_snapshot_rw ...passed 00:03:25.244 Test: blob_snapshot_rw_iov ...passed 00:03:25.244 Test: blob_inflate_rw ...passed 00:03:25.503 Test: blob_snapshot_freeze_io ...passed 00:03:25.503 Test: blob_operation_split_rw ...passed 00:03:25.503 Test: blob_operation_split_rw_iov ...passed 00:03:25.503 Test: blob_simultaneous_operations ...[2024-07-15 18:17:17.752099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.503 [2024-07-15 18:17:17.752165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.503 [2024-07-15 18:17:17.752491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.503 [2024-07-15 18:17:17.752503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.503 [2024-07-15 18:17:17.756494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.503 [2024-07-15 18:17:17.756519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.503 [2024-07-15 18:17:17.756540] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.503 [2024-07-15 18:17:17.756548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.503 passed 00:03:25.503 Test: blob_persist_test ...passed 00:03:25.761 Test: blob_decouple_snapshot ...passed 00:03:25.761 Test: blob_seek_io_unit ...passed 00:03:25.761 Test: blob_nested_freezes ...passed 00:03:25.761 Test: blob_clone_resize ...passed 00:03:25.761 Test: blob_shallow_copy ...[2024-07-15 18:17:18.011068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:25.761 [2024-07-15 18:17:18.011172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:25.761 [2024-07-15 18:17:18.011184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:25.761 passed 00:03:25.761 Suite: blob_blob_nocopy_noextent 00:03:25.761 Test: blob_write ...passed 00:03:25.761 Test: blob_read ...passed 00:03:26.020 Test: blob_rw_verify ...passed 00:03:26.020 Test: blob_rw_verify_iov_nomem ...passed 00:03:26.020 Test: blob_rw_iov_read_only ...passed 00:03:26.020 Test: blob_xattr ...passed 00:03:26.020 Test: blob_dirty_shutdown ...passed 00:03:26.020 Test: blob_is_degraded ...passed 00:03:26.020 Suite: blob_esnap_bs_nocopy_noextent 00:03:26.278 Test: blob_esnap_create ...passed 00:03:26.278 Test: blob_esnap_thread_add_remove ...passed 00:03:26.278 Test: blob_esnap_clone_snapshot ...passed 00:03:26.278 Test: blob_esnap_clone_inflate ...passed 00:03:26.278 Test: blob_esnap_clone_decouple ...passed 00:03:26.278 Test: blob_esnap_clone_reload ...passed 00:03:26.278 Test: blob_esnap_hotplug ...passed 00:03:26.537 Test: blob_set_parent ...[2024-07-15 18:17:18.641290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:26.537 [2024-07-15 18:17:18.641353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:26.537 [2024-07-15 18:17:18.641376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:26.537 [2024-07-15 18:17:18.641387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:26.537 [2024-07-15 18:17:18.641452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:26.537 passed 00:03:26.537 Test: blob_set_external_parent ...[2024-07-15 18:17:18.679097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:26.537 [2024-07-15 18:17:18.679143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:26.537 [2024-07-15 18:17:18.679152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:26.537 [2024-07-15 18:17:18.679205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:26.537 passed 00:03:26.537 Suite: blob_nocopy_extent 00:03:26.537 Test: blob_init ...[2024-07-15 18:17:18.691825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:26.537 passed 00:03:26.537 Test: blob_thin_provision ...passed 00:03:26.537 Test: blob_read_only ...passed 00:03:26.537 Test: bs_load ...[2024-07-15 18:17:18.742200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:26.537 passed 00:03:26.537 Test: bs_load_custom_cluster_size ...passed 00:03:26.537 Test: bs_load_after_failed_grow ...passed 00:03:26.537 Test: bs_cluster_sz ...[2024-07-15 18:17:18.767578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:26.537 [2024-07-15 18:17:18.767642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:26.537 [2024-07-15 18:17:18.767656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:26.537 passed 00:03:26.537 Test: bs_resize_md ...passed 00:03:26.537 Test: bs_destroy ...passed 00:03:26.537 Test: bs_type ...passed 00:03:26.537 Test: bs_super_block ...passed 00:03:26.537 Test: bs_test_recover_cluster_count ...passed 00:03:26.537 Test: bs_grow_live ...passed 00:03:26.537 Test: bs_grow_live_no_space ...passed 00:03:26.537 Test: bs_test_grow ...passed 00:03:26.537 Test: blob_serialize_test ...passed 00:03:26.537 Test: super_block_crc ...passed 00:03:26.795 Test: blob_thin_prov_write_count_io ...passed 00:03:26.795 Test: blob_thin_prov_unmap_cluster ...passed 00:03:26.795 Test: bs_load_iter_test ...passed 00:03:26.795 Test: blob_relations ...[2024-07-15 18:17:18.952512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.795 [2024-07-15 18:17:18.952579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.795 [2024-07-15 18:17:18.952710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.795 [2024-07-15 18:17:18.952722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.795 passed 00:03:26.795 Test: blob_relations2 ...[2024-07-15 18:17:18.966345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.795 [2024-07-15 18:17:18.966390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.796 [2024-07-15 18:17:18.966400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.796 [2024-07-15 18:17:18.966407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.796 [2024-07-15 
18:17:18.966594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.796 [2024-07-15 18:17:18.966607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.796 [2024-07-15 18:17:18.966648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:26.796 [2024-07-15 18:17:18.966658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.796 passed 00:03:26.796 Test: blob_relations3 ...passed 00:03:26.796 Test: blobstore_clean_power_failure ...passed 00:03:26.796 Test: blob_delete_snapshot_power_failure ...[2024-07-15 18:17:19.150568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:27.054 [2024-07-15 18:17:19.163885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:27.054 [2024-07-15 18:17:19.177255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:27.054 [2024-07-15 18:17:19.177316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:27.054 [2024-07-15 18:17:19.177325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 [2024-07-15 18:17:19.190636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:27.054 [2024-07-15 18:17:19.190689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:27.054 [2024-07-15 18:17:19.190699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:27.054 [2024-07-15 18:17:19.190707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 [2024-07-15 18:17:19.204074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:27.054 [2024-07-15 18:17:19.204118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:27.054 [2024-07-15 18:17:19.204127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:27.054 [2024-07-15 18:17:19.204135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 [2024-07-15 18:17:19.217417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:27.054 [2024-07-15 18:17:19.217466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 [2024-07-15 18:17:19.230749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:27.054 [2024-07-15 18:17:19.230816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 [2024-07-15 18:17:19.244045] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:27.054 [2024-07-15 18:17:19.244103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.054 passed 00:03:27.054 Test: blob_create_snapshot_power_failure ...[2024-07-15 18:17:19.284609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:27.054 [2024-07-15 18:17:19.297717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:27.054 [2024-07-15 18:17:19.324112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:27.054 [2024-07-15 18:17:19.337328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:27.054 passed 00:03:27.054 Test: blob_io_unit ...passed 00:03:27.054 Test: blob_io_unit_compatibility ...passed 00:03:27.312 Test: blob_ext_md_pages ...passed 00:03:27.312 Test: blob_esnap_io_4096_4096 ...passed 00:03:27.313 Test: blob_esnap_io_512_512 ...passed 00:03:27.313 Test: blob_esnap_io_4096_512 ...passed 00:03:27.313 Test: blob_esnap_io_512_4096 ...passed 00:03:27.313 Test: blob_esnap_clone_resize ...passed 00:03:27.313 Suite: blob_bs_nocopy_extent 00:03:27.313 Test: blob_open ...passed 00:03:27.313 Test: blob_create ...[2024-07-15 18:17:19.613471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:27.313 passed 00:03:27.570 Test: blob_create_loop ...passed 00:03:27.570 Test: blob_create_fail ...[2024-07-15 18:17:19.705668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:27.570 passed 00:03:27.570 Test: blob_create_internal ...passed 00:03:27.570 Test: blob_create_zero_extent ...passed 00:03:27.570 Test: blob_snapshot ...passed 00:03:27.570 Test: blob_clone ...passed 00:03:27.570 Test: blob_inflate ...[2024-07-15 18:17:19.904229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:03:27.570 passed 00:03:27.828 Test: blob_delete ...passed 00:03:27.828 Test: blob_resize_test ...[2024-07-15 18:17:19.977152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:27.828 passed 00:03:27.828 Test: blob_resize_thin_test ...passed 00:03:27.828 Test: channel_ops ...passed 00:03:27.828 Test: blob_super ...passed 00:03:27.828 Test: blob_rw_verify_iov ...passed 00:03:27.828 Test: blob_unmap ...passed 00:03:28.110 Test: blob_iter ...passed 00:03:28.110 Test: blob_parse_md ...passed 00:03:28.110 Test: bs_load_pending_removal ...passed 00:03:28.110 Test: bs_unload ...[2024-07-15 18:17:20.331670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:28.110 passed 00:03:28.110 Test: bs_usable_clusters ...passed 00:03:28.110 Test: blob_crc ...[2024-07-15 18:17:20.410796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:28.110 [2024-07-15 18:17:20.410882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:28.110 passed 00:03:28.370 Test: blob_flags ...passed 00:03:28.370 Test: bs_version ...passed 00:03:28.370 Test: blob_set_xattrs_test ...[2024-07-15 18:17:20.528937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:28.370 [2024-07-15 18:17:20.529001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:28.370 passed 00:03:28.370 Test: blob_thin_prov_alloc ...passed 00:03:28.370 Test: blob_insert_cluster_msg_test ...passed 00:03:28.370 Test: blob_thin_prov_rw ...passed 00:03:28.370 Test: blob_thin_prov_rle ...passed 00:03:28.628 Test: blob_thin_prov_rw_iov ...passed 00:03:28.628 Test: blob_snapshot_rw ...passed 00:03:28.628 Test: blob_snapshot_rw_iov ...passed 00:03:28.628 Test: blob_inflate_rw ...passed 00:03:28.628 Test: blob_snapshot_freeze_io ...passed 00:03:28.887 Test: blob_operation_split_rw ...passed 00:03:28.887 Test: blob_operation_split_rw_iov ...passed 00:03:28.887 Test: blob_simultaneous_operations ...[2024-07-15 18:17:21.099052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:28.887 [2024-07-15 18:17:21.099120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.887 [2024-07-15 18:17:21.099436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:28.887 [2024-07-15 18:17:21.099447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.887 [2024-07-15 18:17:21.103394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:28.887 [2024-07-15 18:17:21.103423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.887 [2024-07-15 18:17:21.103445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:28.887 [2024-07-15 18:17:21.103453] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.887 passed 00:03:28.887 Test: blob_persist_test ...passed 00:03:28.887 Test: blob_decouple_snapshot ...passed 00:03:28.887 Test: blob_seek_io_unit ...passed 00:03:29.146 Test: blob_nested_freezes ...passed 00:03:29.146 Test: blob_clone_resize ...passed 00:03:29.146 Test: blob_shallow_copy ...[2024-07-15 18:17:21.356723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:29.146 [2024-07-15 18:17:21.356809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:29.146 [2024-07-15 18:17:21.356822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:29.146 passed 00:03:29.146 Suite: blob_blob_nocopy_extent 00:03:29.146 Test: blob_write ...passed 00:03:29.146 Test: blob_read ...passed 00:03:29.146 Test: blob_rw_verify ...passed 00:03:29.405 Test: blob_rw_verify_iov_nomem ...passed 00:03:29.405 Test: blob_rw_iov_read_only ...passed 00:03:29.405 Test: blob_xattr ...passed 00:03:29.405 Test: blob_dirty_shutdown ...passed 00:03:29.405 Test: blob_is_degraded ...passed 00:03:29.405 Suite: blob_esnap_bs_nocopy_extent 00:03:29.405 Test: blob_esnap_create ...passed 00:03:29.405 Test: blob_esnap_thread_add_remove ...passed 00:03:29.664 Test: blob_esnap_clone_snapshot ...passed 00:03:29.664 Test: blob_esnap_clone_inflate ...passed 00:03:29.664 Test: blob_esnap_clone_decouple ...passed 00:03:29.664 Test: blob_esnap_clone_reload ...passed 00:03:29.664 Test: blob_esnap_hotplug ...passed 00:03:29.664 Test: blob_set_parent ...[2024-07-15 18:17:21.974778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:29.664 [2024-07-15 18:17:21.974845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:29.664 [2024-07-15 18:17:21.974871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:29.664 [2024-07-15 18:17:21.974881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:29.664 [2024-07-15 18:17:21.974948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:29.664 passed 00:03:29.664 Test: blob_set_external_parent ...[2024-07-15 18:17:22.013020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:29.664 [2024-07-15 18:17:22.013079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:29.664 [2024-07-15 18:17:22.013089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:29.664 [2024-07-15 18:17:22.013142] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:29.923 passed 00:03:29.923 Suite: blob_copy_noextent 00:03:29.923 Test: blob_init ...[2024-07-15 18:17:22.025984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:29.923 passed 00:03:29.923 Test: blob_thin_provision ...passed 00:03:29.923 Test: blob_read_only ...passed 00:03:29.923 Test: bs_load ...[2024-07-15 18:17:22.077315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:29.923 passed 00:03:29.923 Test: bs_load_custom_cluster_size ...passed 00:03:29.923 Test: bs_load_after_failed_grow ...passed 00:03:29.923 Test: bs_cluster_sz ...[2024-07-15 18:17:22.103662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:29.923 [2024-07-15 18:17:22.103739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:29.923 [2024-07-15 18:17:22.103767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:29.923 passed 00:03:29.923 Test: bs_resize_md ...passed 00:03:29.923 Test: bs_destroy ...passed 00:03:29.923 Test: bs_type ...passed 00:03:29.923 Test: bs_super_block ...passed 00:03:29.923 Test: bs_test_recover_cluster_count ...passed 00:03:29.923 Test: bs_grow_live ...passed 00:03:29.923 Test: bs_grow_live_no_space ...passed 00:03:29.923 Test: bs_test_grow ...passed 00:03:29.923 Test: blob_serialize_test ...passed 00:03:29.923 Test: super_block_crc ...passed 00:03:29.923 Test: blob_thin_prov_write_count_io ...passed 00:03:29.923 Test: blob_thin_prov_unmap_cluster ...passed 00:03:29.923 Test: bs_load_iter_test ...passed 00:03:30.181 Test: blob_relations ...[2024-07-15 18:17:22.288620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.181 [2024-07-15 18:17:22.288683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.181 [2024-07-15 18:17:22.288800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.181 [2024-07-15 18:17:22.288812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.181 passed 00:03:30.181 Test: blob_relations2 ...[2024-07-15 18:17:22.302224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.181 [2024-07-15 18:17:22.302272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.181 [2024-07-15 18:17:22.302281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.182 [2024-07-15 18:17:22.302288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 [2024-07-15 18:17:22.302422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:03:30.182 [2024-07-15 18:17:22.302434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 [2024-07-15 18:17:22.302469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.182 [2024-07-15 18:17:22.302479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 passed 00:03:30.182 Test: blob_relations3 ...passed 00:03:30.182 Test: blobstore_clean_power_failure ...passed 00:03:30.182 Test: blob_delete_snapshot_power_failure ...[2024-07-15 18:17:22.481110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:30.182 [2024-07-15 18:17:22.494100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:30.182 [2024-07-15 18:17:22.494154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:30.182 [2024-07-15 18:17:22.494164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 [2024-07-15 18:17:22.507090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:30.182 [2024-07-15 18:17:22.507130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:30.182 [2024-07-15 18:17:22.507139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:30.182 [2024-07-15 18:17:22.507147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 [2024-07-15 18:17:22.520256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:30.182 [2024-07-15 18:17:22.520319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.182 [2024-07-15 18:17:22.533582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:30.182 [2024-07-15 18:17:22.533641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.440 [2024-07-15 18:17:22.546856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:30.440 [2024-07-15 18:17:22.546915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.440 passed 00:03:30.440 Test: blob_create_snapshot_power_failure ...[2024-07-15 18:17:22.586167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:30.440 [2024-07-15 18:17:22.611863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:30.440 [2024-07-15 18:17:22.624991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:30.440 passed 
00:03:30.440 Test: blob_io_unit ...passed 00:03:30.440 Test: blob_io_unit_compatibility ...passed 00:03:30.440 Test: blob_ext_md_pages ...passed 00:03:30.441 Test: blob_esnap_io_4096_4096 ...passed 00:03:30.441 Test: blob_esnap_io_512_512 ...passed 00:03:30.441 Test: blob_esnap_io_4096_512 ...passed 00:03:30.699 Test: blob_esnap_io_512_4096 ...passed 00:03:30.699 Test: blob_esnap_clone_resize ...passed 00:03:30.699 Suite: blob_bs_copy_noextent 00:03:30.699 Test: blob_open ...passed 00:03:30.699 Test: blob_create ...[2024-07-15 18:17:22.897297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:30.699 passed 00:03:30.699 Test: blob_create_loop ...passed 00:03:30.699 Test: blob_create_fail ...[2024-07-15 18:17:22.987898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:30.699 passed 00:03:30.699 Test: blob_create_internal ...passed 00:03:30.957 Test: blob_create_zero_extent ...passed 00:03:30.957 Test: blob_snapshot ...passed 00:03:30.957 Test: blob_clone ...passed 00:03:30.957 Test: blob_inflate ...[2024-07-15 18:17:23.181306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:30.957 passed 00:03:30.957 Test: blob_delete ...passed 00:03:30.957 Test: blob_resize_test ...[2024-07-15 18:17:23.254453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:30.957 passed 00:03:30.957 Test: blob_resize_thin_test ...passed 00:03:31.214 Test: channel_ops ...passed 00:03:31.214 Test: blob_super ...passed 00:03:31.214 Test: blob_rw_verify_iov ...passed 00:03:31.214 Test: blob_unmap ...passed 00:03:31.214 Test: blob_iter ...passed 00:03:31.214 Test: blob_parse_md ...passed 00:03:31.472 Test: bs_load_pending_removal ...passed 00:03:31.472 Test: bs_unload ...[2024-07-15 18:17:23.607148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:31.472 passed 00:03:31.472 Test: bs_usable_clusters ...passed 00:03:31.472 Test: blob_crc ...[2024-07-15 18:17:23.681782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:31.472 [2024-07-15 18:17:23.681858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:31.472 passed 00:03:31.472 Test: blob_flags ...passed 00:03:31.472 Test: bs_version ...passed 00:03:31.472 Test: blob_set_xattrs_test ...[2024-07-15 18:17:23.796903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.472 [2024-07-15 18:17:23.796975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.472 passed 00:03:31.745 Test: blob_thin_prov_alloc ...passed 00:03:31.745 Test: blob_insert_cluster_msg_test ...passed 00:03:31.745 Test: blob_thin_prov_rw ...passed 00:03:31.745 Test: blob_thin_prov_rle ...passed 00:03:31.745 Test: blob_thin_prov_rw_iov ...passed 00:03:31.745 Test: blob_snapshot_rw ...passed 00:03:31.745 Test: blob_snapshot_rw_iov ...passed 00:03:32.021 Test: 
blob_inflate_rw ...passed 00:03:32.021 Test: blob_snapshot_freeze_io ...passed 00:03:32.021 Test: blob_operation_split_rw ...passed 00:03:32.021 Test: blob_operation_split_rw_iov ...passed 00:03:32.021 Test: blob_simultaneous_operations ...[2024-07-15 18:17:24.351046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.022 [2024-07-15 18:17:24.351109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.022 [2024-07-15 18:17:24.351427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.022 [2024-07-15 18:17:24.351438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.022 [2024-07-15 18:17:24.354149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.022 [2024-07-15 18:17:24.354166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.022 [2024-07-15 18:17:24.354185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.022 [2024-07-15 18:17:24.354192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.022 passed 00:03:32.280 Test: blob_persist_test ...passed 00:03:32.280 Test: blob_decouple_snapshot ...passed 00:03:32.280 Test: blob_seek_io_unit ...passed 00:03:32.280 Test: blob_nested_freezes ...passed 00:03:32.280 Test: blob_clone_resize ...passed 00:03:32.280 Test: blob_shallow_copy ...[2024-07-15 18:17:24.600894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:32.280 [2024-07-15 18:17:24.600965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:32.280 [2024-07-15 18:17:24.600978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:32.280 passed 00:03:32.280 Suite: blob_blob_copy_noextent 00:03:32.539 Test: blob_write ...passed 00:03:32.539 Test: blob_read ...passed 00:03:32.539 Test: blob_rw_verify ...passed 00:03:32.539 Test: blob_rw_verify_iov_nomem ...passed 00:03:32.539 Test: blob_rw_iov_read_only ...passed 00:03:32.539 Test: blob_xattr ...passed 00:03:32.539 Test: blob_dirty_shutdown ...passed 00:03:32.797 Test: blob_is_degraded ...passed 00:03:32.797 Suite: blob_esnap_bs_copy_noextent 00:03:32.797 Test: blob_esnap_create ...passed 00:03:32.797 Test: blob_esnap_thread_add_remove ...passed 00:03:32.797 Test: blob_esnap_clone_snapshot ...passed 00:03:32.797 Test: blob_esnap_clone_inflate ...passed 00:03:32.797 Test: blob_esnap_clone_decouple ...passed 00:03:33.056 Test: blob_esnap_clone_reload ...passed 00:03:33.056 Test: blob_esnap_hotplug ...passed 00:03:33.056 Test: blob_set_parent ...[2024-07-15 18:17:25.227469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:33.056 [2024-07-15 18:17:25.227544] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:33.056 [2024-07-15 18:17:25.227569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:33.056 [2024-07-15 18:17:25.227580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:33.056 [2024-07-15 18:17:25.227635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:33.056 passed 00:03:33.056 Test: blob_set_external_parent ...[2024-07-15 18:17:25.265255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:33.056 [2024-07-15 18:17:25.265304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:33.056 [2024-07-15 18:17:25.265313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:33.056 [2024-07-15 18:17:25.265368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:33.056 passed 00:03:33.056 Suite: blob_copy_extent 00:03:33.056 Test: blob_init ...[2024-07-15 18:17:25.277934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:33.056 passed 00:03:33.056 Test: blob_thin_provision ...passed 00:03:33.056 Test: blob_read_only ...passed 00:03:33.056 Test: bs_load ...[2024-07-15 18:17:25.328495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:33.056 passed 00:03:33.056 Test: bs_load_custom_cluster_size ...passed 00:03:33.056 Test: bs_load_after_failed_grow ...passed 00:03:33.056 Test: bs_cluster_sz ...[2024-07-15 18:17:25.353798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:33.056 [2024-07-15 18:17:25.353873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:33.056 [2024-07-15 18:17:25.353888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:33.056 passed 00:03:33.056 Test: bs_resize_md ...passed 00:03:33.056 Test: bs_destroy ...passed 00:03:33.312 Test: bs_type ...passed 00:03:33.312 Test: bs_super_block ...passed 00:03:33.312 Test: bs_test_recover_cluster_count ...passed 00:03:33.312 Test: bs_grow_live ...passed 00:03:33.312 Test: bs_grow_live_no_space ...passed 00:03:33.312 Test: bs_test_grow ...passed 00:03:33.312 Test: blob_serialize_test ...passed 00:03:33.312 Test: super_block_crc ...passed 00:03:33.312 Test: blob_thin_prov_write_count_io ...passed 00:03:33.312 Test: blob_thin_prov_unmap_cluster ...passed 00:03:33.312 Test: bs_load_iter_test ...passed 00:03:33.312 Test: blob_relations ...[2024-07-15 18:17:25.542641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.542709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 [2024-07-15 18:17:25.542837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.542850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 passed 00:03:33.312 Test: blob_relations2 ...[2024-07-15 18:17:25.556690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.556773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 [2024-07-15 18:17:25.556788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.556800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 [2024-07-15 18:17:25.557008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.557026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 [2024-07-15 18:17:25.557080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.312 [2024-07-15 18:17:25.557094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.312 passed 00:03:33.312 Test: blob_relations3 ...passed 00:03:33.569 Test: blobstore_clean_power_failure ...passed 00:03:33.569 Test: blob_delete_snapshot_power_failure ...[2024-07-15 18:17:25.740666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.569 [2024-07-15 18:17:25.754184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.569 [2024-07-15 18:17:25.767574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.569 [2024-07-15 18:17:25.767659] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.569 [2024-07-15 18:17:25.767674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 [2024-07-15 18:17:25.781790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.569 [2024-07-15 18:17:25.781840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:33.569 [2024-07-15 18:17:25.781849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.569 [2024-07-15 18:17:25.781857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 [2024-07-15 18:17:25.795066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.569 [2024-07-15 18:17:25.795139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:33.569 [2024-07-15 18:17:25.795152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.569 [2024-07-15 18:17:25.795164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 [2024-07-15 18:17:25.808338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:33.569 [2024-07-15 18:17:25.808403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 [2024-07-15 18:17:25.821446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:33.569 [2024-07-15 18:17:25.821527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 [2024-07-15 18:17:25.834753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:33.569 [2024-07-15 18:17:25.834828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.569 passed 00:03:33.570 Test: blob_create_snapshot_power_failure ...[2024-07-15 18:17:25.874346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.570 [2024-07-15 18:17:25.887694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.570 [2024-07-15 18:17:25.914066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.570 [2024-07-15 18:17:25.927227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:33.827 passed 00:03:33.827 Test: blob_io_unit ...passed 00:03:33.827 Test: blob_io_unit_compatibility ...passed 00:03:33.827 Test: blob_ext_md_pages ...passed 00:03:33.827 Test: blob_esnap_io_4096_4096 ...passed 00:03:33.827 Test: blob_esnap_io_512_512 ...passed 00:03:33.827 Test: blob_esnap_io_4096_512 ...passed 00:03:33.827 Test: 
blob_esnap_io_512_4096 ...passed 00:03:33.827 Test: blob_esnap_clone_resize ...passed 00:03:33.827 Suite: blob_bs_copy_extent 00:03:33.827 Test: blob_open ...passed 00:03:34.084 Test: blob_create ...[2024-07-15 18:17:26.204086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:34.084 passed 00:03:34.084 Test: blob_create_loop ...passed 00:03:34.084 Test: blob_create_fail ...[2024-07-15 18:17:26.296058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.084 passed 00:03:34.084 Test: blob_create_internal ...passed 00:03:34.084 Test: blob_create_zero_extent ...passed 00:03:34.084 Test: blob_snapshot ...passed 00:03:34.341 Test: blob_clone ...passed 00:03:34.341 Test: blob_inflate ...[2024-07-15 18:17:26.498557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:34.341 passed 00:03:34.341 Test: blob_delete ...passed 00:03:34.341 Test: blob_resize_test ...[2024-07-15 18:17:26.572339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:34.341 passed 00:03:34.341 Test: blob_resize_thin_test ...passed 00:03:34.341 Test: channel_ops ...passed 00:03:34.341 Test: blob_super ...passed 00:03:34.598 Test: blob_rw_verify_iov ...passed 00:03:34.598 Test: blob_unmap ...passed 00:03:34.598 Test: blob_iter ...passed 00:03:34.598 Test: blob_parse_md ...passed 00:03:34.598 Test: bs_load_pending_removal ...passed 00:03:34.598 Test: bs_unload ...[2024-07-15 18:17:26.932886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:34.598 passed 00:03:34.854 Test: bs_usable_clusters ...passed 00:03:34.854 Test: blob_crc ...[2024-07-15 18:17:27.010983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.854 [2024-07-15 18:17:27.011076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.854 passed 00:03:34.854 Test: blob_flags ...passed 00:03:34.854 Test: bs_version ...passed 00:03:34.854 Test: blob_set_xattrs_test ...[2024-07-15 18:17:27.125919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.854 [2024-07-15 18:17:27.125987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.854 passed 00:03:34.854 Test: blob_thin_prov_alloc ...passed 00:03:35.109 Test: blob_insert_cluster_msg_test ...passed 00:03:35.109 Test: blob_thin_prov_rw ...passed 00:03:35.109 Test: blob_thin_prov_rle ...passed 00:03:35.109 Test: blob_thin_prov_rw_iov ...passed 00:03:35.109 Test: blob_snapshot_rw ...passed 00:03:35.109 Test: blob_snapshot_rw_iov ...passed 00:03:35.367 Test: blob_inflate_rw ...passed 00:03:35.367 Test: blob_snapshot_freeze_io ...passed 00:03:35.367 Test: blob_operation_split_rw ...passed 00:03:35.367 Test: blob_operation_split_rw_iov ...passed 00:03:35.367 Test: blob_simultaneous_operations ...[2024-07-15 18:17:27.699009] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.367 [2024-07-15 18:17:27.699108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.367 [2024-07-15 18:17:27.699456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.367 [2024-07-15 18:17:27.699482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.367 [2024-07-15 18:17:27.702414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.367 [2024-07-15 18:17:27.702446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.367 [2024-07-15 18:17:27.702476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.367 [2024-07-15 18:17:27.702491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.367 passed 00:03:35.624 Test: blob_persist_test ...passed 00:03:35.624 Test: blob_decouple_snapshot ...passed 00:03:35.624 Test: blob_seek_io_unit ...passed 00:03:35.624 Test: blob_nested_freezes ...passed 00:03:35.624 Test: blob_clone_resize ...passed 00:03:35.624 Test: blob_shallow_copy ...[2024-07-15 18:17:27.953055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:35.624 [2024-07-15 18:17:27.953152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:35.624 [2024-07-15 18:17:27.953169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:35.624 passed 00:03:35.624 Suite: blob_blob_copy_extent 00:03:35.887 Test: blob_write ...passed 00:03:35.887 Test: blob_read ...passed 00:03:35.887 Test: blob_rw_verify ...passed 00:03:35.887 Test: blob_rw_verify_iov_nomem ...passed 00:03:35.887 Test: blob_rw_iov_read_only ...passed 00:03:35.887 Test: blob_xattr ...passed 00:03:35.887 Test: blob_dirty_shutdown ...passed 00:03:36.144 Test: blob_is_degraded ...passed 00:03:36.144 Suite: blob_esnap_bs_copy_extent 00:03:36.144 Test: blob_esnap_create ...passed 00:03:36.144 Test: blob_esnap_thread_add_remove ...passed 00:03:36.144 Test: blob_esnap_clone_snapshot ...passed 00:03:36.144 Test: blob_esnap_clone_inflate ...passed 00:03:36.144 Test: blob_esnap_clone_decouple ...passed 00:03:36.402 Test: blob_esnap_clone_reload ...passed 00:03:36.402 Test: blob_esnap_hotplug ...passed 00:03:36.402 Test: blob_set_parent ...[2024-07-15 18:17:28.576906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:36.402 [2024-07-15 18:17:28.576994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:36.402 [2024-07-15 18:17:28.577214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:36.402 
[2024-07-15 18:17:28.577235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:36.402 [2024-07-15 18:17:28.577317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:36.402 passed 00:03:36.402 Test: blob_set_external_parent ...[2024-07-15 18:17:28.615758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:36.403 [2024-07-15 18:17:28.615833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:36.403 [2024-07-15 18:17:28.615848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:36.403 [2024-07-15 18:17:28.615922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:36.403 passed 00:03:36.403 00:03:36.403 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.403 suites 16 16 n/a 0 0 00:03:36.403 tests 376 376 376 0 0 00:03:36.403 asserts 143965 143965 143965 0 n/a 00:03:36.403 00:03:36.403 Elapsed time = 13.219 seconds 00:03:36.403 18:17:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:36.403 00:03:36.403 00:03:36.403 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.403 http://cunit.sourceforge.net/ 00:03:36.403 00:03:36.403 00:03:36.403 Suite: blob_bdev 00:03:36.403 Test: create_bs_dev ...passed 00:03:36.403 Test: create_bs_dev_ro ...[2024-07-15 18:17:28.639793] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:36.403 passed 00:03:36.403 Test: create_bs_dev_rw ...passed 00:03:36.403 Test: claim_bs_dev ...passed 00:03:36.403 Test: claim_bs_dev_ro ...passed 00:03:36.403 Test: deferred_destroy_refs ...passed 00:03:36.403 Test: deferred_destroy_channels ...passed 00:03:36.403 Test: deferred_destroy_threads ...passed 00:03:36.403 00:03:36.403 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.403 suites 1 1 n/a 0 0 00:03:36.403 tests 8 8 8 0 0 00:03:36.403 asserts 119 119 119 0 n/a 00:03:36.403 00:03:36.403 Elapsed time = 0.000 seconds 00:03:36.403 [2024-07-15 18:17:28.640014] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:36.403 18:17:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:36.403 00:03:36.403 00:03:36.403 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.403 http://cunit.sourceforge.net/ 00:03:36.403 00:03:36.403 00:03:36.403 Suite: tree 00:03:36.403 Test: blobfs_tree_op_test ...passed 00:03:36.403 00:03:36.403 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.403 suites 1 1 n/a 0 0 00:03:36.403 tests 1 1 1 0 0 00:03:36.403 asserts 27 27 27 0 n/a 00:03:36.403 00:03:36.403 Elapsed time = 0.000 seconds 00:03:36.403 18:17:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:36.403 00:03:36.403 00:03:36.403 
CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.403 http://cunit.sourceforge.net/ 00:03:36.403 00:03:36.403 00:03:36.403 Suite: blobfs_async_ut 00:03:36.403 Test: fs_init ...passed 00:03:36.403 Test: fs_open ...passed 00:03:36.403 Test: fs_create ...passed 00:03:36.403 Test: fs_truncate ...passed 00:03:36.403 Test: fs_rename ...[2024-07-15 18:17:28.751869] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:36.403 passed 00:03:36.662 Test: fs_rw_async ...passed 00:03:36.662 Test: fs_writev_readv_async ...passed 00:03:36.662 Test: tree_find_buffer_ut ...passed 00:03:36.662 Test: channel_ops ...passed 00:03:36.662 Test: channel_ops_sync ...passed 00:03:36.662 00:03:36.662 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.662 suites 1 1 n/a 0 0 00:03:36.662 tests 10 10 10 0 0 00:03:36.662 asserts 292 292 292 0 n/a 00:03:36.662 00:03:36.662 Elapsed time = 0.148 seconds 00:03:36.662 18:17:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:36.662 00:03:36.662 00:03:36.662 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.662 http://cunit.sourceforge.net/ 00:03:36.662 00:03:36.662 00:03:36.662 Suite: blobfs_sync_ut 00:03:36.662 Test: cache_read_after_write ...[2024-07-15 18:17:28.862193] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:36.662 passed 00:03:36.662 Test: file_length ...passed 00:03:36.662 Test: append_write_to_extend_blob ...passed 00:03:36.662 Test: partial_buffer ...passed 00:03:36.662 Test: cache_write_null_buffer ...passed 00:03:36.662 Test: fs_create_sync ...passed 00:03:36.662 Test: fs_rename_sync ...passed 00:03:36.662 Test: cache_append_no_cache ...passed 00:03:36.662 Test: fs_delete_file_without_close ...passed 00:03:36.662 00:03:36.662 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.662 suites 1 1 n/a 0 0 00:03:36.662 tests 9 9 9 0 0 00:03:36.662 asserts 345 345 345 0 n/a 00:03:36.662 00:03:36.662 Elapsed time = 0.297 seconds 00:03:36.662 18:17:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:36.662 00:03:36.662 00:03:36.662 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.662 http://cunit.sourceforge.net/ 00:03:36.662 00:03:36.662 00:03:36.662 Suite: blobfs_bdev_ut 00:03:36.662 Test: spdk_blobfs_bdev_detect_test ...passed 00:03:36.662 Test: spdk_blobfs_bdev_create_test ...[2024-07-15 18:17:28.978104] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:36.662 passed 00:03:36.662 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:36.662 00:03:36.662 [2024-07-15 18:17:28.978307] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:36.662 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.662 suites 1 1 n/a 0 0 00:03:36.662 tests 3 3 3 0 0 00:03:36.662 asserts 9 9 9 0 n/a 00:03:36.662 00:03:36.662 Elapsed time = 0.000 seconds 00:03:36.662 00:03:36.662 real 0m13.586s 00:03:36.662 user 0m13.559s 00:03:36.662 sys 0m0.173s 00:03:36.662 18:17:28 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.662 18:17:28 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:36.662 ************************************ 00:03:36.662 END TEST unittest_blob_blobfs 00:03:36.662 ************************************ 00:03:36.662 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.662 18:17:29 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:36.662 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.662 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.662 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.662 ************************************ 00:03:36.662 START TEST unittest_event 00:03:36.662 ************************************ 00:03:36.662 18:17:29 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:03:36.662 18:17:29 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:36.662 00:03:36.662 00:03:36.662 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.662 http://cunit.sourceforge.net/ 00:03:36.662 00:03:36.662 00:03:36.662 Suite: app_suite 00:03:36.662 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:36.662 00:03:36.662 CPU options: 00:03:36.662 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:36.662 (like [0,1,10]) 00:03:36.662 --lcores lcore to CPU mapping list. The list is in the format: 00:03:36.662 [<,lcores[@CPUs]>...] 00:03:36.662 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:36.662 Within the group, '-' is used for range separator, 00:03:36.662 ',' is used for single number separator. 00:03:36.662 '( )' can be omitted for single element group, 00:03:36.662 '@' can be omitted if cpus and lcores have the same value 00:03:36.662 --disable-cpumask-locks Disable CPU core lock files. 00:03:36.662 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:36.662 pollers in the app support interrupt mode) 00:03:36.662 -p, --main-core main (primary) core for DPDK 00:03:36.662 00:03:36.662 Configuration options: 00:03:36.662 -c, --config, --json JSON config file 00:03:36.662 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:36.662 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:36.662 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:36.662 --rpcs-allowed comma-separated list of permitted RPCS 00:03:36.662 --json-ignore-init-errors don't exit on invalid config entry 00:03:36.662 00:03:36.662 Memory options: 00:03:36.662 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:36.662 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:36.662 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:36.662 -R, --huge-unlink unlink huge files after initialization 00:03:36.662 -n, --mem-channels number of memory channels used for DPDK 00:03:36.662 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:36.662 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:36.662 --no-huge run without using hugepages 00:03:36.662 -i, --shm-id shared memory ID (optional) 00:03:36.662 -g, --single-file-segments force creating just one hugetlbfs file 00:03:36.662 00:03:36.662 PCI options: 00:03:36.662 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:36.662 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:36.662 -u, --no-pci disable PCI access 00:03:36.662 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:36.662 00:03:36.662 Log options: 00:03:36.662 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:36.662 --silence-noticelog disable notice level logging to stderr 00:03:36.662 00:03:36.662 Trace options: 00:03:36.662 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:36.662 setting 0 to disable trace (default 32768) 00:03:36.662 Tracepoints vary in size and can use more than one trace entry. 00:03:36.662 -e, --tpoint-group [:] 00:03:36.662 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:36.662 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:36.662 a tracepoint group. First tpoint inside a group can be enabled by 00:03:36.662 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:36.662 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:36.662 in /include/spdk_internal/trace_defs.h 00:03:36.662 00:03:36.662 Other options: 00:03:36.662 -h, --help show this usage 00:03:36.662 -v, --version print SPDK version 00:03:36.662 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:36.662 --env-context Opaque context for use of the env implementation 00:03:36.662 app_ut [options] 00:03:36.662 00:03:36.662 CPU options: 00:03:36.662 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:36.662 (like [0,1,10]) 00:03:36.662 --lcores lcore to CPU mapping list. The list is in the format: 00:03:36.662 [<,lcores[@CPUs]>...] 00:03:36.662 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:36.662 Within the group, '-' is used for range separator, 00:03:36.662 ',' is used for single number separator. 00:03:36.662 '( )' can be omitted for single element group, 00:03:36.662 '@' can be omitted if cpus and lcores have the same value 00:03:36.662 --disable-cpumask-locks Disable CPU core lock files. 
00:03:36.662 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:36.662 pollers in the app support interrupt mode) 00:03:36.662 -p, --main-core main (primary) core for DPDK 00:03:36.662 00:03:36.662 Configuration options: 00:03:36.662 -c, --config, --json JSON config file 00:03:36.663 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:36.663 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:36.663 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:36.663 --rpcs-allowed comma-separated list of permitted RPCS 00:03:36.663 --json-ignore-init-errors don't exit on invalid config entry 00:03:36.663 00:03:36.663 Memory options: 00:03:36.663 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:36.663 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:36.663 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:36.663 -R, --huge-unlink unlink huge files after initialization 00:03:36.663 -n, --mem-channels number of memory channels used for DPDK 00:03:36.663 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:36.663 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:36.663 --no-huge run without using hugepages 00:03:36.663 -i, --shm-id shared memory ID (optional) 00:03:36.663 -g, --single-file-segments force creating just one hugetlbfs file 00:03:36.663 00:03:36.663 PCI options: 00:03:36.663 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:36.663 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:36.663 -u, --no-pci disable PCI access 00:03:36.663 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:36.663 00:03:36.663 Log options: 00:03:36.663 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:36.663 --silence-noticelog disable notice level logging to stderr 00:03:36.663 00:03:36.663 Trace options: 00:03:36.663 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:36.663 setting 0 to disable trace (default 32768) 00:03:36.663 Tracepoints vary in size and can use more than one trace entry. 00:03:36.663 -e, --tpoint-group [:] 00:03:36.663 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:36.663 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:36.663 a tracepoint group. First tpoint inside a group can be enabled by 00:03:36.663 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:36.663 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:36.663 in /include/spdk_internal/trace_defs.h 00:03:36.663 00:03:36.663 Other options: 00:03:36.663 -h, --help show this usage 00:03:36.663 -v, --version print SPDK version 00:03:36.663 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:36.663 --env-context Opaque context for use of the env implementation 00:03:36.663 app_ut: invalid option -- z 00:03:36.663 app_ut: unrecognized option `--test-long-opt' 00:03:36.663 app_ut [options] 00:03:36.663 00:03:36.663 CPU options: 00:03:36.663 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:36.663 (like [0,1,10]) 00:03:36.663 --lcores lcore to CPU mapping list. The list is in the format: 00:03:36.663 [<,lcores[@CPUs]>...] 
00:03:36.663 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:36.663 Within the group, '-' is used for range separator, 00:03:36.663 ',' is used for single number separator. 00:03:36.663 '( )' can be omitted for single element group, 00:03:36.663 '@' can be omitted if cpus and lcores have the same value 00:03:36.663 --disable-cpumask-locks Disable CPU core lock files. 00:03:36.663 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:36.663 pollers in the app support interrupt mode) 00:03:36.663 -p, --main-core main (primary) core for DPDK 00:03:36.663 00:03:36.663 Configuration options: 00:03:36.663 -c, --config, --json JSON config file 00:03:36.663 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:36.663 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:36.663 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:36.663 --rpcs-allowed comma-separated list of permitted RPCS 00:03:36.663 --json-ignore-init-errors don't exit on invalid config entry 00:03:36.663 00:03:36.663 Memory options: 00:03:36.663 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:36.663 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:36.663 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:36.663 -R, --huge-unlink unlink huge files after initialization 00:03:36.663 -n, --mem-channels number of memory channels used for DPDK 00:03:36.663 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:36.663 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:36.663 --no-huge run without using hugepages 00:03:36.663 -i, --shm-id shared memory ID (optional) 00:03:36.663 -g, --single-file-segments force creating just one hugetlbfs file 00:03:36.663 00:03:36.663 PCI options: 00:03:36.663 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:36.663 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:36.663 -u, --no-pci disable PCI access 00:03:36.663 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:36.663 00:03:36.663 Log options: 00:03:36.663 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:36.663 --silence-noticelog disable notice level logging to stderr 00:03:36.663 [2024-07-15 18:17:29.016522] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:03:36.663 [2024-07-15 18:17:29.016706] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:36.663 00:03:36.663 Trace options: 00:03:36.663 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:36.663 setting 0 to disable trace (default 32768) 00:03:36.663 Tracepoints vary in size and can use more than one trace entry. 00:03:36.663 -e, --tpoint-group [:] 00:03:36.663 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:36.663 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:36.663 a tracepoint group. First tpoint inside a group can be enabled by 00:03:36.663 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:36.663 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:03:36.663 in /include/spdk_internal/trace_defs.h 00:03:36.663 00:03:36.663 Other options: 00:03:36.663 -h, --help show this usage 00:03:36.663 -v, --version print SPDK version 00:03:36.663 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:36.663 --env-context Opaque context for use of the env implementation 00:03:36.663 passed 00:03:36.663 00:03:36.663 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.663 suites 1 1 n/a 0 0 00:03:36.663 tests 1 1 1 0 0 00:03:36.663 asserts 8 8 8 0 n/a 00:03:36.663 00:03:36.663 Elapsed time = 0.000 seconds 00:03:36.663 [2024-07-15 18:17:29.016786] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:36.663 18:17:29 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:36.922 00:03:36.922 00:03:36.922 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.922 http://cunit.sourceforge.net/ 00:03:36.922 00:03:36.922 00:03:36.922 Suite: app_suite 00:03:36.922 Test: test_create_reactor ...passed 00:03:36.922 Test: test_init_reactors ...passed 00:03:36.922 Test: test_event_call ...passed 00:03:36.922 Test: test_schedule_thread ...passed 00:03:36.922 Test: test_reschedule_thread ...passed 00:03:36.922 Test: test_bind_thread ...passed 00:03:36.922 Test: test_for_each_reactor ...passed 00:03:36.922 Test: test_reactor_stats ...passed 00:03:36.922 Test: test_scheduler ...passed 00:03:36.922 Test: test_governor ...passed 00:03:36.922 00:03:36.922 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.922 suites 1 1 n/a 0 0 00:03:36.922 tests 10 10 10 0 0 00:03:36.922 asserts 336 336 336 0 n/a 00:03:36.922 00:03:36.922 Elapsed time = 0.000 seconds 00:03:36.922 00:03:36.922 real 0m0.012s 00:03:36.922 user 0m0.007s 00:03:36.922 sys 0m0.008s 00:03:36.922 18:17:29 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.922 18:17:29 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:36.922 ************************************ 00:03:36.922 END TEST unittest_event 00:03:36.922 ************************************ 00:03:36.922 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.922 18:17:29 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:36.922 18:17:29 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:36.922 18:17:29 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:36.922 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.922 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.922 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.922 ************************************ 00:03:36.922 START TEST unittest_accel 00:03:36.922 ************************************ 00:03:36.922 18:17:29 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:36.922 00:03:36.922 00:03:36.922 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.922 http://cunit.sourceforge.net/ 00:03:36.922 00:03:36.922 00:03:36.922 Suite: accel_sequence 00:03:36.922 Test: test_sequence_fill_copy ...passed 00:03:36.922 Test: test_sequence_abort ...passed 00:03:36.922 Test: test_sequence_append_error ...passed 00:03:36.922 Test: 
test_sequence_completion_error ...[2024-07-15 18:17:29.062300] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x33fabd2ce540 00:03:36.923 [2024-07-15 18:17:29.062527] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x33fabd2ce540 00:03:36.923 [2024-07-15 18:17:29.062547] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x33fabd2ce540 00:03:36.923 passed 00:03:36.923 Test: test_sequence_decompress ...[2024-07-15 18:17:29.062567] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x33fabd2ce540 00:03:36.923 passed 00:03:36.923 Test: test_sequence_reverse ...passed 00:03:36.923 Test: test_sequence_copy_elision ...passed 00:03:36.923 Test: test_sequence_accel_buffers ...passed 00:03:36.923 Test: test_sequence_memory_domain ...[2024-07-15 18:17:29.063968] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1748:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:36.923 passed 00:03:36.923 Test: test_sequence_module_memory_domain ...[2024-07-15 18:17:29.064010] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1787:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:36.923 passed 00:03:36.923 Test: test_sequence_crypto ...passed 00:03:36.923 Test: test_sequence_driver ...[2024-07-15 18:17:29.064779] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1895:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x33fabd2cec80 using driver: ut 00:03:36.923 [2024-07-15 18:17:29.064817] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x33fabd2cec80 through driver: ut 00:03:36.923 passed 00:03:36.923 Test: test_sequence_same_iovs ...passed 00:03:36.923 Test: test_sequence_crc32 ...passed 00:03:36.923 Suite: accel 00:03:36.923 Test: test_spdk_accel_task_complete ...passed 00:03:36.923 Test: test_get_task ...passed 00:03:36.923 Test: test_spdk_accel_submit_copy ...passed 00:03:36.923 Test: test_spdk_accel_submit_dualcast ...[2024-07-15 18:17:29.065433] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:36.923 [2024-07-15 18:17:29.065450] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:36.923 passed 00:03:36.923 Test: test_spdk_accel_submit_compare ...passed 00:03:36.923 Test: test_spdk_accel_submit_fill ...passed 00:03:36.923 Test: test_spdk_accel_submit_crc32c ...passed 00:03:36.923 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:36.923 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:36.923 Test: test_spdk_accel_submit_xor ...passed 00:03:36.923 Test: test_spdk_accel_module_find_by_name ...passed 00:03:36.923 Test: test_spdk_accel_module_register ...passed 00:03:36.923 00:03:36.923 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.923 suites 2 2 n/a 0 0 00:03:36.923 tests 26 26 26 0 0 00:03:36.923 asserts 830 830 830 0 n/a 00:03:36.923 00:03:36.923 Elapsed time = 0.008 seconds 00:03:36.923 00:03:36.923 real 0m0.011s 00:03:36.923 user 0m0.011s 00:03:36.923 sys 0m0.003s 00:03:36.923 18:17:29 unittest.unittest_accel -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.923 18:17:29 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 ************************************ 00:03:36.923 END TEST unittest_accel 00:03:36.923 ************************************ 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.923 18:17:29 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 ************************************ 00:03:36.923 START TEST unittest_ioat 00:03:36.923 ************************************ 00:03:36.923 18:17:29 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:36.923 00:03:36.923 00:03:36.923 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.923 http://cunit.sourceforge.net/ 00:03:36.923 00:03:36.923 00:03:36.923 Suite: ioat 00:03:36.923 Test: ioat_state_check ...passed 00:03:36.923 00:03:36.923 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.923 suites 1 1 n/a 0 0 00:03:36.923 tests 1 1 1 0 0 00:03:36.923 asserts 32 32 32 0 n/a 00:03:36.923 00:03:36.923 Elapsed time = 0.000 seconds 00:03:36.923 00:03:36.923 real 0m0.003s 00:03:36.923 user 0m0.000s 00:03:36.923 sys 0m0.008s 00:03:36.923 18:17:29 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.923 ************************************ 00:03:36.923 END TEST unittest_ioat 00:03:36.923 ************************************ 00:03:36.923 18:17:29 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.923 18:17:29 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:36.923 18:17:29 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 ************************************ 00:03:36.923 START TEST unittest_idxd_user 00:03:36.923 ************************************ 00:03:36.923 18:17:29 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:36.923 00:03:36.923 00:03:36.923 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.923 http://cunit.sourceforge.net/ 00:03:36.923 00:03:36.923 00:03:36.923 Suite: idxd_user 00:03:36.923 Test: test_idxd_wait_cmd ...[2024-07-15 18:17:29.145861] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:36.923 passed 00:03:36.923 Test: test_idxd_reset_dev ...passed 00:03:36.923 Test: test_idxd_group_config ...passed 00:03:36.923 Test: test_idxd_wq_config ...passed 00:03:36.923 00:03:36.923 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.923 suites 1 1 n/a 0 0 00:03:36.923 tests 4 4 4 0 0 00:03:36.923 
asserts 20 20 20 0 n/a 00:03:36.923 00:03:36.923 Elapsed time = 0.000 seconds 00:03:36.923 [2024-07-15 18:17:29.146046] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:36.923 [2024-07-15 18:17:29.146067] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:36.923 [2024-07-15 18:17:29.146079] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:36.923 00:03:36.923 real 0m0.004s 00:03:36.923 user 0m0.000s 00:03:36.923 sys 0m0.003s 00:03:36.923 18:17:29 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.923 ************************************ 00:03:36.923 END TEST unittest_idxd_user 00:03:36.923 18:17:29 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 ************************************ 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.923 18:17:29 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.923 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.923 ************************************ 00:03:36.923 START TEST unittest_iscsi 00:03:36.924 ************************************ 00:03:36.924 18:17:29 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:03:36.924 18:17:29 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:36.924 00:03:36.924 00:03:36.924 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.924 http://cunit.sourceforge.net/ 00:03:36.924 00:03:36.924 00:03:36.924 Suite: conn_suite 00:03:36.924 Test: read_task_split_in_order_case ...passed 00:03:36.924 Test: read_task_split_reverse_order_case ...passed 00:03:36.924 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:36.924 Test: process_non_read_task_completion_test ...passed 00:03:36.924 Test: free_tasks_on_connection ...passed 00:03:36.924 Test: free_tasks_with_queued_datain ...passed 00:03:36.924 Test: abort_queued_datain_task_test ...passed 00:03:36.924 Test: abort_queued_datain_tasks_test ...passed 00:03:36.924 00:03:36.924 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.924 suites 1 1 n/a 0 0 00:03:36.924 tests 8 8 8 0 0 00:03:36.924 asserts 230 230 230 0 n/a 00:03:36.924 00:03:36.924 Elapsed time = 0.000 seconds 00:03:36.924 18:17:29 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:36.924 00:03:36.924 00:03:36.924 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.924 http://cunit.sourceforge.net/ 00:03:36.924 00:03:36.924 00:03:36.924 Suite: iscsi_suite 00:03:36.924 Test: param_negotiation_test ...passed 00:03:36.924 Test: list_negotiation_test ...passed 00:03:36.924 Test: parse_valid_test ...passed 00:03:36.924 Test: parse_invalid_test ...[2024-07-15 18:17:29.191641] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:36.924 [2024-07-15 18:17:29.191818] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:36.924 [2024-07-15 18:17:29.191833] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:36.924 [2024-07-15 18:17:29.191857] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:36.924 [2024-07-15 18:17:29.191874] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:36.924 [2024-07-15 18:17:29.191886] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:36.924 passed[2024-07-15 18:17:29.191898] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:36.924 00:03:36.924 00:03:36.924 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.924 suites 1 1 n/a 0 0 00:03:36.924 tests 4 4 4 0 0 00:03:36.924 asserts 161 161 161 0 n/a 00:03:36.924 00:03:36.924 Elapsed time = 0.000 seconds 00:03:36.924 18:17:29 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:36.924 00:03:36.924 00:03:36.924 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.924 http://cunit.sourceforge.net/ 00:03:36.924 00:03:36.924 00:03:36.924 Suite: iscsi_target_node_suite 00:03:36.924 Test: add_lun_test_cases ...[2024-07-15 18:17:29.195966] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:36.924 [2024-07-15 18:17:29.196304] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:36.924 [2024-07-15 18:17:29.196550] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:36.924 [2024-07-15 18:17:29.196587] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:36.924 passed 00:03:36.924 Test: allow_any_allowed ...passed 00:03:36.924 Test: allow_ipv6_allowed ...[2024-07-15 18:17:29.196601] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:36.924 passed 00:03:36.924 Test: allow_ipv6_denied ...passed 00:03:36.924 Test: allow_ipv6_invalid ...passed 00:03:36.924 Test: allow_ipv4_allowed ...passed 00:03:36.924 Test: allow_ipv4_denied ...passed 00:03:36.924 Test: allow_ipv4_invalid ...passed 00:03:36.924 Test: node_access_allowed ...passed 00:03:36.924 Test: node_access_denied_by_empty_netmask ...passed 00:03:36.924 Test: node_access_multi_initiator_groups_cases ...passed 00:03:36.924 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:36.924 Test: chap_param_test_cases ...[2024-07-15 18:17:29.196678] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:36.924 [2024-07-15 18:17:29.196700] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:36.924 [2024-07-15 18:17:29.196707] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:36.924 [2024-07-15 18:17:29.196713] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:36.924 passed 00:03:36.924 00:03:36.924 [2024-07-15 18:17:29.196720] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:36.924 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.924 suites 1 1 n/a 0 0 00:03:36.924 tests 13 13 13 0 0 00:03:36.924 asserts 50 50 50 0 n/a 00:03:36.924 00:03:36.924 Elapsed time = 0.000 seconds 00:03:36.924 18:17:29 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:36.924 00:03:36.924 00:03:36.924 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.924 http://cunit.sourceforge.net/ 00:03:36.924 00:03:36.924 00:03:36.924 Suite: iscsi_suite 00:03:36.924 Test: op_login_check_target_test ...[2024-07-15 18:17:29.201802] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:03:36.924 passed 00:03:36.924 Test: op_login_session_normal_test ...[2024-07-15 18:17:29.202005] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:36.924 [2024-07-15 18:17:29.202023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:36.924 [2024-07-15 18:17:29.202035] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:36.924 [2024-07-15 18:17:29.202068] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:36.924 [2024-07-15 18:17:29.202082] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:36.924 [2024-07-15 18:17:29.202106] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:36.924 [2024-07-15 18:17:29.202118] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:36.924 passed 00:03:36.924 Test: maxburstlength_test ...passed 00:03:36.924 Test: underflow_for_read_transfer_test ...passed 00:03:36.924 Test: underflow_for_zero_read_transfer_test ...[2024-07-15 18:17:29.202173] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:36.924 [2024-07-15 18:17:29.202191] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:36.924 passed 00:03:36.924 Test: underflow_for_request_sense_test ...passed 00:03:36.924 Test: underflow_for_check_condition_test ...passed 00:03:36.924 Test: add_transfer_task_test ...passed 00:03:36.924 Test: get_transfer_task_test ...passed 00:03:36.924 Test: del_transfer_task_test ...passed 00:03:36.924 Test: clear_all_transfer_tasks_test ...passed 00:03:36.924 Test: build_iovs_test ...passed 00:03:36.924 Test: build_iovs_with_md_test ...passed 00:03:36.925 Test: pdu_hdr_op_login_test ...passed 00:03:36.925 Test: pdu_hdr_op_text_test ...passed 00:03:36.925 Test: pdu_hdr_op_logout_test ...passed 00:03:36.925 Test: pdu_hdr_op_scsi_test ...[2024-07-15 18:17:29.202343] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:36.925 [2024-07-15 18:17:29.202361] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:36.925 [2024-07-15 18:17:29.202373] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:36.925 [2024-07-15 18:17:29.202393] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:36.925 [2024-07-15 18:17:29.202406] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:36.925 [2024-07-15 18:17:29.202418] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:36.925 [2024-07-15 18:17:29.202432] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:36.925 passed 00:03:36.925 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-15 18:17:29.202449] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:36.925 [2024-07-15 18:17:29.202460] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:36.925 [2024-07-15 18:17:29.202472] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:36.925 [2024-07-15 18:17:29.202484] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:36.925 [2024-07-15 18:17:29.202497] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:36.925 [2024-07-15 18:17:29.202510] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:36.925 [2024-07-15 18:17:29.202524] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:36.925 [2024-07-15 18:17:29.202537] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:36.925 passed 00:03:36.925 Test: pdu_hdr_op_nopout_test ...passed 00:03:36.925 Test: pdu_hdr_op_data_test ...[2024-07-15 18:17:29.202559] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:36.925 [2024-07-15 18:17:29.202581] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:36.925 [2024-07-15 18:17:29.202593] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:36.925 [2024-07-15 18:17:29.202604] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:36.925 [2024-07-15 18:17:29.202618] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:36.925 passed 00:03:36.925 Test: empty_text_with_cbit_test ...[2024-07-15 18:17:29.202637] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:36.925 [2024-07-15 18:17:29.202649] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:36.925 [2024-07-15 18:17:29.202660] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:36.925 [2024-07-15 18:17:29.202673] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:36.925 [2024-07-15 18:17:29.202684] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:36.925 [2024-07-15 18:17:29.202696] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:36.925 passed 00:03:36.925 Test: pdu_payload_read_test ...[2024-07-15 18:17:29.203097] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:36.925 passed 00:03:36.925 Test: data_out_pdu_sequence_test ...passed 00:03:36.925 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:36.925 00:03:36.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.925 suites 1 1 n/a 0 0 00:03:36.925 tests 24 24 24 0 0 00:03:36.925 asserts 150253 150253 150253 0 n/a 00:03:36.925 00:03:36.925 Elapsed time = 0.000 seconds 00:03:36.925 18:17:29 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:36.925 00:03:36.925 00:03:36.925 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.925 http://cunit.sourceforge.net/ 00:03:36.925 00:03:36.925 00:03:36.925 Suite: init_grp_suite 00:03:36.925 Test: create_initiator_group_success_case ...passed 00:03:36.925 Test: find_initiator_group_success_case ...passed 00:03:36.925 Test: register_initiator_group_twice_case ...passed 00:03:36.925 Test: add_initiator_name_success_case ...passed 00:03:36.925 Test: add_initiator_name_fail_case ...passed[2024-07-15 18:17:29.209493] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:36.925 00:03:36.925 Test: delete_all_initiator_names_success_case ...passed 00:03:36.925 Test: add_netmask_success_case ...passed 00:03:36.925 Test: add_netmask_fail_case ...passed 00:03:36.925 Test: delete_all_netmasks_success_case ...passed 00:03:36.925 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:36.925 Test: netmask_overwrite_all_to_any_case ...passed 00:03:36.925 Test: add_delete_initiator_names_case ...passed 00:03:36.925 Test: add_duplicated_initiator_names_case ...passed 00:03:36.925 Test: delete_nonexisting_initiator_names_case ...passed 00:03:36.925 Test: add_delete_netmasks_case ...passed 00:03:36.925 Test: add_duplicated_netmasks_case ...passed 00:03:36.925 Test: delete_nonexisting_netmasks_case ...passed 00:03:36.925 00:03:36.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.925 suites 1 1 n/a 0 0 00:03:36.925 tests 17 17 17 0 0 00:03:36.925 asserts 108 108 108 0 n/a 00:03:36.925 00:03:36.925 Elapsed time = 0.000 seconds 00:03:36.925 [2024-07-15 18:17:29.209684] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:36.925 18:17:29 
unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:36.925 00:03:36.925 00:03:36.925 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.925 http://cunit.sourceforge.net/ 00:03:36.925 00:03:36.925 00:03:36.925 Suite: portal_grp_suite 00:03:36.925 Test: portal_create_ipv4_normal_case ...passed 00:03:36.925 Test: portal_create_ipv6_normal_case ...passed 00:03:36.925 Test: portal_create_ipv4_wildcard_case ...passed 00:03:36.925 Test: portal_create_ipv6_wildcard_case ...passed 00:03:36.925 Test: portal_create_twice_case ...[2024-07-15 18:17:29.214248] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:36.925 passed 00:03:36.925 Test: portal_grp_register_unregister_case ...passed 00:03:36.925 Test: portal_grp_register_twice_case ...passed 00:03:36.925 Test: portal_grp_add_delete_case ...passed 00:03:36.925 Test: portal_grp_add_delete_twice_case ...passed 00:03:36.925 00:03:36.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.925 suites 1 1 n/a 0 0 00:03:36.925 tests 9 9 9 0 0 00:03:36.925 asserts 44 44 44 0 n/a 00:03:36.925 00:03:36.925 Elapsed time = 0.000 seconds 00:03:36.925 00:03:36.925 real 0m0.032s 00:03:36.925 user 0m0.007s 00:03:36.925 sys 0m0.025s 00:03:36.925 18:17:29 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.926 18:17:29 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:36.926 ************************************ 00:03:36.926 END TEST unittest_iscsi 00:03:36.926 ************************************ 00:03:36.926 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:36.926 18:17:29 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:36.926 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.926 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.926 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:36.926 ************************************ 00:03:36.926 START TEST unittest_json 00:03:36.926 ************************************ 00:03:36.926 18:17:29 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:03:36.926 18:17:29 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:36.926 00:03:36.926 00:03:36.926 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.926 http://cunit.sourceforge.net/ 00:03:36.926 00:03:36.926 00:03:36.926 Suite: json 00:03:36.926 Test: test_parse_literal ...passed 00:03:36.926 Test: test_parse_string_simple ...passed 00:03:36.926 Test: test_parse_string_control_chars ...passed 00:03:36.926 Test: test_parse_string_utf8 ...passed 00:03:36.926 Test: test_parse_string_escapes_twochar ...passed 00:03:36.926 Test: test_parse_string_escapes_unicode ...passed 00:03:36.926 Test: test_parse_number ...passed 00:03:36.926 Test: test_parse_array ...passed 00:03:36.926 Test: test_parse_object ...passed 00:03:36.926 Test: test_parse_nesting ...passed 00:03:36.926 Test: test_parse_comment ...passed 00:03:36.926 00:03:36.926 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.926 suites 1 1 n/a 0 0 00:03:36.926 tests 11 11 11 0 0 00:03:36.926 asserts 1516 1516 1516 0 n/a 00:03:36.926 00:03:36.926 Elapsed time = 0.000 seconds 00:03:36.926 18:17:29 unittest.unittest_json -- 
unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:36.926 00:03:36.926 00:03:36.926 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.926 http://cunit.sourceforge.net/ 00:03:36.926 00:03:36.926 00:03:36.926 Suite: json 00:03:36.926 Test: test_strequal ...passed 00:03:36.926 Test: test_num_to_uint16 ...passed 00:03:36.926 Test: test_num_to_int32 ...passed 00:03:36.926 Test: test_num_to_uint64 ...passed 00:03:36.926 Test: test_decode_object ...passed 00:03:36.926 Test: test_decode_array ...passed 00:03:36.926 Test: test_decode_bool ...passed 00:03:36.926 Test: test_decode_uint16 ...passed 00:03:36.926 Test: test_decode_int32 ...passed 00:03:36.926 Test: test_decode_uint32 ...passed 00:03:36.926 Test: test_decode_uint64 ...passed 00:03:36.926 Test: test_decode_string ...passed 00:03:36.926 Test: test_decode_uuid ...passed 00:03:36.926 Test: test_find ...passed 00:03:36.926 Test: test_find_array ...passed 00:03:36.926 Test: test_iterating ...passed 00:03:36.926 Test: test_free_object ...passed 00:03:36.926 00:03:36.926 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.926 suites 1 1 n/a 0 0 00:03:36.926 tests 17 17 17 0 0 00:03:36.926 asserts 236 236 236 0 n/a 00:03:36.926 00:03:36.926 Elapsed time = 0.000 seconds 00:03:36.926 18:17:29 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:36.926 00:03:36.926 00:03:36.926 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.926 http://cunit.sourceforge.net/ 00:03:36.926 00:03:36.926 00:03:36.926 Suite: json 00:03:36.926 Test: test_write_literal ...passed 00:03:36.926 Test: test_write_string_simple ...passed 00:03:36.926 Test: test_write_string_escapes ...passed 00:03:36.926 Test: test_write_string_utf16le ...passed 00:03:36.926 Test: test_write_number_int32 ...passed 00:03:36.926 Test: test_write_number_uint32 ...passed 00:03:36.926 Test: test_write_number_uint128 ...passed 00:03:36.926 Test: test_write_string_number_uint128 ...passed 00:03:36.926 Test: test_write_number_int64 ...passed 00:03:36.926 Test: test_write_number_uint64 ...passed 00:03:36.926 Test: test_write_number_double ...passed 00:03:36.926 Test: test_write_uuid ...passed 00:03:36.926 Test: test_write_array ...passed 00:03:36.926 Test: test_write_object ...passed 00:03:36.926 Test: test_write_nesting ...passed 00:03:36.926 Test: test_write_val ...passed 00:03:36.926 00:03:36.926 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.926 suites 1 1 n/a 0 0 00:03:36.926 tests 16 16 16 0 0 00:03:36.926 asserts 918 918 918 0 n/a 00:03:36.926 00:03:36.926 Elapsed time = 0.000 seconds 00:03:36.926 18:17:29 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:36.926 00:03:36.926 00:03:36.926 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.926 http://cunit.sourceforge.net/ 00:03:36.926 00:03:36.926 00:03:36.926 Suite: jsonrpc 00:03:36.926 Test: test_parse_request ...passed 00:03:36.926 Test: test_parse_request_streaming ...passed 00:03:36.926 00:03:36.926 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.926 suites 1 1 n/a 0 0 00:03:36.926 tests 2 2 2 0 0 00:03:36.926 asserts 289 289 289 0 n/a 00:03:36.926 00:03:36.926 Elapsed time = 0.000 seconds 00:03:36.926 00:03:36.926 real 0m0.022s 00:03:36.926 user 0m0.009s 00:03:36.926 sys 0m0.014s 00:03:36.926 18:17:29 unittest.unittest_json -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.926 ************************************ 00:03:36.926 18:17:29 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.926 END TEST unittest_json 00:03:36.926 ************************************ 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:37.185 18:17:29 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:37.185 ************************************ 00:03:37.185 START TEST unittest_rpc 00:03:37.185 ************************************ 00:03:37.185 18:17:29 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:03:37.185 18:17:29 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:37.185 00:03:37.185 00:03:37.185 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.185 http://cunit.sourceforge.net/ 00:03:37.185 00:03:37.185 00:03:37.185 Suite: rpc 00:03:37.185 Test: test_jsonrpc_handler ...passed 00:03:37.185 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:37.185 Test: test_rpc_get_methods ...[2024-07-15 18:17:29.309326] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:37.185 passed 00:03:37.185 Test: test_rpc_spdk_get_version ...passed 00:03:37.185 Test: test_spdk_rpc_listen_close ...passed 00:03:37.185 Test: test_rpc_run_multiple_servers ...passed 00:03:37.185 00:03:37.185 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.185 suites 1 1 n/a 0 0 00:03:37.185 tests 6 6 6 0 0 00:03:37.185 asserts 23 23 23 0 n/a 00:03:37.185 00:03:37.185 Elapsed time = 0.000 seconds 00:03:37.185 00:03:37.185 real 0m0.005s 00:03:37.185 user 0m0.004s 00:03:37.185 sys 0m0.004s 00:03:37.185 18:17:29 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.185 18:17:29 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.185 ************************************ 00:03:37.185 END TEST unittest_rpc 00:03:37.185 ************************************ 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:37.185 18:17:29 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.185 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:37.185 ************************************ 00:03:37.185 START TEST unittest_notify 00:03:37.185 ************************************ 00:03:37.185 18:17:29 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:37.185 00:03:37.185 00:03:37.185 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.185 http://cunit.sourceforge.net/ 00:03:37.185 00:03:37.185 00:03:37.185 Suite: app_suite 00:03:37.185 Test: notify ...passed 00:03:37.185 00:03:37.185 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.185 suites 1 1 n/a 0 0 00:03:37.185 tests 1 1 1 0 0 00:03:37.185 asserts 13 13 13 0 n/a 
00:03:37.185 00:03:37.185 Elapsed time = 0.000 seconds 00:03:37.185 00:03:37.185 real 0m0.004s 00:03:37.185 user 0m0.004s 00:03:37.185 sys 0m0.004s 00:03:37.185 ************************************ 00:03:37.185 END TEST unittest_notify 00:03:37.186 ************************************ 00:03:37.186 18:17:29 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.186 18:17:29 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:37.186 18:17:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:37.186 18:17:29 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:37.186 18:17:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.186 18:17:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.186 18:17:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:37.186 ************************************ 00:03:37.186 START TEST unittest_nvme 00:03:37.186 ************************************ 00:03:37.186 18:17:29 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:03:37.186 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:37.186 00:03:37.186 00:03:37.186 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.186 http://cunit.sourceforge.net/ 00:03:37.186 00:03:37.186 00:03:37.186 Suite: nvme 00:03:37.186 Test: test_opc_data_transfer ...passed 00:03:37.186 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:37.186 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:37.186 Test: test_trid_parse_and_compare ...[2024-07-15 18:17:29.382386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:37.186 [2024-07-15 18:17:29.382575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:37.186 [2024-07-15 18:17:29.382592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:37.186 [2024-07-15 18:17:29.382603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:37.186 [2024-07-15 18:17:29.382614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:03:37.186 [2024-07-15 18:17:29.382624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:37.186 passed 00:03:37.186 Test: test_trid_trtype_str ...passed 00:03:37.186 Test: test_trid_adrfam_str ...passed 00:03:37.186 Test: test_nvme_ctrlr_probe ...passed 00:03:37.186 Test: test_spdk_nvme_probe ...passed 00:03:37.186 Test: test_spdk_nvme_connect ...[2024-07-15 18:17:29.382723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:37.186 [2024-07-15 18:17:29.382749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:37.186 [2024-07-15 18:17:29.382760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:37.186 [2024-07-15 18:17:29.382772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:37.186 [2024-07-15 18:17:29.382783] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:37.186 [2024-07-15 18:17:29.382808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:37.186 passed 00:03:37.186 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:37.186 Test: test_nvme_init_controllers ...[2024-07-15 18:17:29.382889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:37.186 [2024-07-15 18:17:29.382913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:37.186 [2024-07-15 18:17:29.382924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:37.186 passed 00:03:37.186 Test: test_nvme_driver_init ...[2024-07-15 18:17:29.382938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:37.186 [2024-07-15 18:17:29.382958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:37.186 [2024-07-15 18:17:29.382969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:37.186 [2024-07-15 18:17:29.492135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:37.186 passed 00:03:37.186 Test: test_spdk_nvme_detach ...passed 00:03:37.186 Test: test_nvme_completion_poll_cb ...passed 00:03:37.186 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:37.186 Test: test_nvme_allocate_request_null ...passed 00:03:37.186 Test: test_nvme_allocate_request ...passed 00:03:37.186 Test: test_nvme_free_request ...passed 00:03:37.186 Test: test_nvme_allocate_request_user_copy ...passed 00:03:37.186 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:37.186 Test: test_nvme_request_check_timeout ...passed 00:03:37.186 Test: test_nvme_wait_for_completion ...passed 00:03:37.186 Test: test_spdk_nvme_parse_func ...passed 00:03:37.186 Test: test_spdk_nvme_detach_async ...passed 00:03:37.186 Test: test_nvme_parse_addr ...[2024-07-15 18:17:29.492403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:37.186 passed 00:03:37.186 00:03:37.186 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.186 suites 1 1 n/a 0 0 00:03:37.186 tests 25 25 25 0 0 00:03:37.186 asserts 326 326 326 0 n/a 00:03:37.186 00:03:37.186 Elapsed time = 0.000 seconds 00:03:37.186 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:37.186 00:03:37.186 00:03:37.186 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.186 http://cunit.sourceforge.net/ 00:03:37.186 00:03:37.186 00:03:37.186 Suite: nvme_ctrlr 00:03:37.186 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 18:17:29.498497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 18:17:29.500078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 
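
Note on the trid-parse failures above: they are negative tests feeding deliberately malformed transport ID strings (a key with no ':' or '=' separator, a key longer than 31 characters, a key with no value) into the public parser. A minimal sketch of the well-formed case follows; it assumes only spdk_nvme_transport_id_parse() from spdk/nvme.h, and the error handling is illustrative rather than taken from this run.

    /* Sketch: parsing a well-formed NVMe transport ID string.
     * Malformed inputs of the kinds logged above make the call
     * return nonzero instead. */
    #include <stdio.h>
    #include <string.h>
    #include <spdk/nvme.h>

    static int parse_trid_example(void)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        /* Space-separated key:value pairs; key names are capped at 31 chars. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:PCIe traddr:0000:00:04.0") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return -1;
        }
        return 0;
    }
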
00:03:37.186 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-15 18:17:29.501229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 18:17:29.502380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 18:17:29.503541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 [2024-07-15 18:17:29.504668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 18:17:29.505811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 18:17:29.506945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:37.186 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 18:17:29.509245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 [2024-07-15 18:17:29.511558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 18:17:29.512715] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:37.186 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 18:17:29.515022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 [2024-07-15 18:17:29.516486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 18:17:29.518819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:37.186 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 18:17:29.521157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 18:17:29.522352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:37.186 Test: test_ctrlr_get_default_io_qpair_opts ...passed[2024-07-15 18:17:29.522412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:37.186 [2024-07-15 18:17:29.522437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:37.186 [2024-07-15 18:17:29.522455] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:37.186 [2024-07-15 18:17:29.522470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:37.186 00:03:37.186 Test: test_alloc_io_qpair_wrr_1 ...passed 00:03:37.186 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:37.186 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:03:37.186 Test: test_nvme_ctrlr_fail ...passed 00:03:37.186 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:37.186 Test: test_nvme_ctrlr_set_supported_features ...[2024-07-15 18:17:29.522523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 [2024-07-15 18:17:29.522561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 [2024-07-15 18:17:29.522579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:37.186 [2024-07-15 18:17:29.522602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:37.186 [2024-07-15 18:17:29.522612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:37.186 [2024-07-15 18:17:29.522621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:37.186 [2024-07-15 18:17:29.522631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:37.186 [2024-07-15 18:17:29.522648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
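
Note on the qpair errors above ("No free I/O queue IDs", "invalid queue priority for default round robin arbitration method"): with default round-robin arbitration only priority 0 is legal, so allocation is normally done with unmodified default options. A minimal sketch, assuming the public spdk_nvme_ctrlr_get_default_io_qpair_opts() / spdk_nvme_ctrlr_alloc_io_qpair() pair; nothing here is quoted from the test source.

    /* Sketch: allocating an I/O qpair with default options. */
    #include <stddef.h>
    #include <spdk/nvme.h>

    static struct spdk_nvme_qpair *
    alloc_io_qpair_example(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        /* Leave opts.qprio at its default (0): nonzero priorities are
         * only valid under weighted round robin, as the log shows. */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

The call returns NULL when the controller has no queue IDs left, which is the "No free I/O queue IDs" case exercised above.
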
00:03:37.186 passed 00:03:37.186 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 18:17:29.522682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.186 passed 00:03:37.186 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:37.186 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 18:17:29.523859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:37.446 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:37.446 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:37.446 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 18:17:29.554597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 18:17:29.561204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 18:17:29.562325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 [2024-07-15 18:17:29.562342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:37.446 passed 00:03:37.446 Test: test_alloc_io_qpair_fail ...passed 00:03:37.446 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:37.446 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:37.446 Test: test_nvme_ctrlr_set_state ...[2024-07-15 18:17:29.563448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 [2024-07-15 18:17:29.563469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 18:17:29.563497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:37.446 [2024-07-15 18:17:29.563507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 18:17:29.567223] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 18:17:29.573626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_reset ...[2024-07-15 18:17:29.574797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 18:17:29.574855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 18:17:29.575993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:37.446 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:37.446 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 18:17:29.577220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:37.446 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 18:17:29.578370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:37.446 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 18:17:29.579536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:37.446 [2024-07-15 18:17:29.579565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:03:37.446 passed 00:03:37.446 Test: test_nvme_ctrlr_disable ...[2024-07-15 18:17:29.579580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:37.446 passed 00:03:37.446 00:03:37.446 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.446 suites 1 1 n/a 0 0 00:03:37.446 tests 44 44 44 0 0 00:03:37.446 asserts 10434 10434 10434 0 n/a 00:03:37.446 00:03:37.446 Elapsed time = 0.039 seconds 00:03:37.446 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:37.446 00:03:37.446 00:03:37.446 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:37.446 http://cunit.sourceforge.net/ 00:03:37.446 00:03:37.446 00:03:37.446 Suite: nvme_ctrlr_cmd 00:03:37.446 Test: test_get_log_pages ...passed 00:03:37.446 Test: test_set_feature_cmd ...passed 00:03:37.446 Test: test_set_feature_ns_cmd ...passed 00:03:37.446 Test: test_get_feature_cmd ...passed 00:03:37.446 Test: test_get_feature_ns_cmd ...passed 00:03:37.446 Test: test_abort_cmd ...passed 00:03:37.446 Test: test_set_host_id_cmds ...passed 00:03:37.446 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:37.446 Test: test_io_raw_cmd ...passed 00:03:37.446 Test: test_io_raw_cmd_with_md ...passed 00:03:37.446 Test: test_namespace_attach ...passed 00:03:37.446 Test: test_namespace_detach ...passed 00:03:37.446 Test: test_namespace_create ...passed 00:03:37.446 Test: test_namespace_delete ...passed 00:03:37.446 Test: test_doorbell_buffer_config ...passed 00:03:37.446 Test: test_format_nvme ...passed 00:03:37.446 Test: test_fw_commit ...passed 00:03:37.446 Test: test_fw_image_download ...passed 00:03:37.446 Test: test_sanitize ...passed 00:03:37.446 Test: test_directive ...passed 00:03:37.446 Test: test_nvme_request_add_abort ...passed 00:03:37.446 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:37.446 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:37.446 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:37.446 00:03:37.446 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.446 suites 1 1 n/a 0 0 00:03:37.446 tests 24 24 24 0 0 00:03:37.446 asserts 198 198 198 0 n/a 00:03:37.446 00:03:37.446 Elapsed time = 0.000 seconds 00:03:37.446 [2024-07-15 18:17:29.587753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:37.446 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:37.446 00:03:37.446 00:03:37.446 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.446 http://cunit.sourceforge.net/ 00:03:37.446 00:03:37.446 00:03:37.446 Suite: nvme_ctrlr_cmd 00:03:37.446 Test: test_geometry_cmd ...passed 00:03:37.446 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:37.446 00:03:37.446 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.446 suites 1 1 n/a 0 0 00:03:37.446 tests 2 2 2 0 0 00:03:37.446 asserts 7 7 7 0 n/a 00:03:37.446 00:03:37.446 Elapsed time = 0.000 seconds 00:03:37.446 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:37.446 00:03:37.446 00:03:37.446 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.446 http://cunit.sourceforge.net/ 00:03:37.446 00:03:37.446 00:03:37.446 Suite: nvme 00:03:37.446 Test: test_nvme_ns_construct ...passed 00:03:37.446 Test: test_nvme_ns_uuid ...passed 00:03:37.446 Test: test_nvme_ns_csi ...passed 00:03:37.446 Test: test_nvme_ns_data ...passed 00:03:37.446 Test: test_nvme_ns_set_identify_data ...passed 00:03:37.446 Test: test_spdk_nvme_ns_get_values ...passed 00:03:37.446 Test: test_spdk_nvme_ns_is_active ...passed 00:03:37.447 Test: spdk_nvme_ns_supports ...passed 00:03:37.447 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:37.447 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:37.447 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:37.447 Test: test_nvme_ns_find_id_desc ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran 
Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 12 12 12 0 0 00:03:37.447 asserts 95 95 95 0 n/a 00:03:37.447 00:03:37.447 Elapsed time = 0.000 seconds 00:03:37.447 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:37.447 00:03:37.447 00:03:37.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.447 http://cunit.sourceforge.net/ 00:03:37.447 00:03:37.447 00:03:37.447 Suite: nvme_ns_cmd 00:03:37.447 Test: split_test ...passed 00:03:37.447 Test: split_test2 ...passed 00:03:37.447 Test: split_test3 ...passed 00:03:37.447 Test: split_test4 ...passed 00:03:37.447 Test: test_nvme_ns_cmd_flush ...passed 00:03:37.447 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:37.447 Test: test_nvme_ns_cmd_copy ...passed 00:03:37.447 Test: test_io_flags ...passed 00:03:37.447 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:37.447 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:37.447 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:37.447 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:37.447 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:37.447 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:37.447 Test: test_cmd_child_request ...passed 00:03:37.447 Test: test_nvme_ns_cmd_readv ...passed 00:03:37.447 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_writev ...[2024-07-15 18:17:29.601327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:37.447 [2024-07-15 18:17:29.601615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:37.447 passed 00:03:37.447 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_comparev ...passed 00:03:37.447 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:37.447 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:37.447 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:37.447 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:37.447 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:37.447 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:37.447 Test: test_nvme_ns_cmd_verify ...passed 00:03:37.447 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:37.447 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 32 32 32 0 0 00:03:37.447 asserts 550 550 550 0 n/a 00:03:37.447 00:03:37.447 Elapsed time = 0.000 seconds 00:03:37.447 [2024-07-15 18:17:29.601744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:37.447 [2024-07-15 18:17:29.601763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:37.447 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:37.447 00:03:37.447 00:03:37.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.447 http://cunit.sourceforge.net/ 
00:03:37.447 00:03:37.447 00:03:37.447 Suite: nvme_ns_cmd 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:37.447 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 12 12 12 0 0 00:03:37.447 asserts 123 123 123 0 n/a 00:03:37.447 00:03:37.447 Elapsed time = 0.000 seconds 00:03:37.447 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:37.447 00:03:37.447 00:03:37.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.447 http://cunit.sourceforge.net/ 00:03:37.447 00:03:37.447 00:03:37.447 Suite: nvme_qpair 00:03:37.447 Test: test3 ...passed 00:03:37.447 Test: test_ctrlr_failed ...passed 00:03:37.447 Test: struct_packing ...passed 00:03:37.447 Test: test_nvme_qpair_process_completions ...[2024-07-15 18:17:29.612298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:37.447 [2024-07-15 18:17:29.612449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:37.447 passed 00:03:37.447 Test: test_nvme_completion_is_retry ...passed 00:03:37.447 Test: test_get_status_string ...passed 00:03:37.447 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:37.447 Test: test_nvme_qpair_submit_request ...passed 00:03:37.447 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:37.447 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:37.447 Test: test_nvme_qpair_init_deinit ...passed 00:03:37.447 Test: test_nvme_get_sgl_print_info ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 12 12 12 0 0 00:03:37.447 asserts 154 154 154 0 n/a 00:03:37.447 00:03:37.447 Elapsed time = 0.000 seconds 00:03:37.447 [2024-07-15 18:17:29.612495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:37.447 [2024-07-15 18:17:29.612506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:37.447 [2024-07-15 18:17:29.612542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:37.447 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:37.447 00:03:37.447 00:03:37.447 CUnit - A unit testing framework for 
C - Version 2.1-3 00:03:37.447 http://cunit.sourceforge.net/ 00:03:37.447 00:03:37.447 00:03:37.447 Suite: nvme_pcie 00:03:37.447 Test: test_prp_list_append ...passed 00:03:37.447 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-15 18:17:29.617413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:37.447 [2024-07-15 18:17:29.617639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:37.447 [2024-07-15 18:17:29.617655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:37.447 [2024-07-15 18:17:29.617700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:37.447 [2024-07-15 18:17:29.617721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:37.447 passed 00:03:37.447 Test: test_shadow_doorbell_update ...passed 00:03:37.447 Test: test_build_contig_hw_sgl_request ...passed 00:03:37.447 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:37.447 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:37.447 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:37.447 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:37.447 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 14 14 14 0 0 00:03:37.447 asserts 235 235 235 0 n/a 00:03:37.447 00:03:37.447 Elapsed time = 0.000 seconds 00:03:37.447 [2024-07-15 18:17:29.617811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:37.447 [2024-07-15 18:17:29.617847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
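Note: the nvme_pcie failures above are injected on purpose and exercise the NVMe PRP-list rules: every PRP entry must be dword aligned, and every entry after the first must also start on a page boundary. A minimal standalone sketch of those two checks follows; prp_entry_ok is a hypothetical helper written for illustration, not SPDK's nvme_pcie_prp_list_append:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* First PRP entry may carry a page offset but must be 4-byte
     * (dword) aligned; later entries must be page aligned too. */
    static bool
    prp_entry_ok(uint64_t addr, bool first)
    {
        if (addr % 4 != 0) {
            return false;   /* cf. "virt_addr 0x100001 not dword aligned" */
        }
        if (!first && addr % PAGE_SIZE != 0) {
            return false;   /* cf. "PRP 2 not page aligned (0x900800)" */
        }
        return true;
    }

    int
    main(void)
    {
        printf("%d\n", prp_entry_ok(0x100001, true));  /* 0: odd address */
        printf("%d\n", prp_entry_ok(0x900800, false)); /* 0: mid-page */
        printf("%d\n", prp_entry_ok(0x901000, false)); /* 1: page aligned */
        return 0;
    }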
00:03:37.447 [2024-07-15 18:17:29.617863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:37.447 [2024-07-15 18:17:29.617878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:37.447 [2024-07-15 18:17:29.617891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:37.447 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:37.447 00:03:37.447 00:03:37.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.447 http://cunit.sourceforge.net/ 00:03:37.447 00:03:37.447 00:03:37.447 Suite: nvme_ns_cmd 00:03:37.447 Test: nvme_poll_group_create_test ...passed 00:03:37.447 Test: nvme_poll_group_add_remove_test ...passed 00:03:37.447 Test: nvme_poll_group_process_completions ...passed 00:03:37.447 Test: nvme_poll_group_destroy_test ...passed 00:03:37.447 Test: nvme_poll_group_get_free_stats ...passed 00:03:37.447 00:03:37.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.447 suites 1 1 n/a 0 0 00:03:37.447 tests 5 5 5 0 0 00:03:37.447 asserts 75 75 75 0 n/a 00:03:37.447 00:03:37.448 Elapsed time = 0.000 seconds 00:03:37.448 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:37.448 00:03:37.448 00:03:37.448 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.448 http://cunit.sourceforge.net/ 00:03:37.448 00:03:37.448 00:03:37.448 Suite: nvme_quirks 00:03:37.448 Test: test_nvme_quirks_striping ...passed 00:03:37.448 00:03:37.448 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.448 suites 1 1 n/a 0 0 00:03:37.448 tests 1 1 1 0 0 00:03:37.448 asserts 5 5 5 0 n/a 00:03:37.448 00:03:37.448 Elapsed time = 0.000 seconds 00:03:37.448 18:17:29 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:37.448 00:03:37.448 00:03:37.448 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.448 http://cunit.sourceforge.net/ 00:03:37.448 00:03:37.448 00:03:37.448 Suite: nvme_tcp 00:03:37.448 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:37.448 Test: test_nvme_tcp_build_iovs ...passed 00:03:37.448 Test: test_nvme_tcp_build_sgl_request ...passed 00:03:37.448 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:37.448 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:37.448 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:37.448 Test: test_nvme_tcp_req_get ...passed 00:03:37.448 Test: test_nvme_tcp_req_init ...passed 00:03:37.448 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:37.448 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:37.448 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:03:37.448 Test: test_nvme_tcp_alloc_reqs ...passed 00:03:37.448 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-15 18:17:29.630300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8205437d8, and the iovcnt=16, remaining_size=28672 00:03:37.448 [2024-07-15 18:17:29.630522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(6) to be set 00:03:37.448 passed 
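Note: each *_ut binary driven by unittest.sh above is a CUnit 2.1-3 program, which is why every suite prints the same banner and a "Run Summary" counting suites, tests, and asserts. A minimal harness of the same shape (illustrative skeleton, not an actual SPDK test file):

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_ADD_TEST(suite, test_example);

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the Run Summary seen in this log */
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return (int)num_failures;
    }

The *ERROR* lines interleaved with "passed" results are expected: the tests feed invalid input to the library and assert that it is rejected, so the error log is part of a passing run.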
00:03:37.448 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 18:17:29.630559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 passed 00:03:37.448 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 18:17:29.630583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820544b18 00:03:37.448 [2024-07-15 18:17:29.630596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:37.448 [2024-07-15 18:17:29.630606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:37.448 [2024-07-15 18:17:29.630628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:37.448 [2024-07-15 18:17:29.630652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:37.448 [2024-07-15 18:17:29.630763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:37.448 [2024-07-15 18:17:29.630776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:55.532 passed 00:03:55.532 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:55.532 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:03:55.532 Test: test_nvme_tcp_icresp_handle ...[2024-07-15 18:17:45.087401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:55.532 [2024-07-15 18:17:45.087529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820544f50): PDU Sequence Error 00:03:55.532 [2024-07-15 
18:17:45.087562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:55.532 [2024-07-15 18:17:45.087580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:55.532 [2024-07-15 18:17:45.087601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:55.532 [2024-07-15 18:17:45.087617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:55.532 passed 00:03:55.532 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:03:55.532 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-15 18:17:45.087633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(5) to be set 00:03:55.532 [2024-07-15 18:17:45.087655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820545388 is same with the state(0) to be set 00:03:55.532 [2024-07-15 18:17:45.087706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820544f50): PDU Sequence Error 00:03:55.532 [2024-07-15 18:17:45.087740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820545388 00:03:55.532 passed 00:03:55.532 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:03:55.532 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:03:55.532 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-15 18:17:45.087799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8205430e8, errno=0, rc=0 00:03:55.532 [2024-07-15 18:17:45.087819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205430e8 is same with the state(5) to be set 00:03:55.532 [2024-07-15 18:17:45.087834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205430e8 is same with the state(5) to be set 00:03:55.532 [2024-07-15 18:17:45.087915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8205430e8 (0): No error: 0 00:03:55.532 [2024-07-15 18:17:45.087933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8205430e8 (0): No error: 0 00:03:55.532 [2024-07-15 18:17:45.190841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:55.532 [2024-07-15 18:17:45.190906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
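Note: the three ICResp failures above come from feeding a malformed NVMe/TCP initialize-connection response to the host code, which requires PFV 0, MAXH2CDATA of at least 4096, and CPDA of at most 31. A standalone sketch of those sanity checks (field names follow the NVMe/TCP spec; the struct and helper are illustrative, not SPDK's):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct icresp {
        uint16_t pfv;        /* PDU format version */
        uint32_t maxh2cdata; /* max host-to-controller data per PDU */
        uint8_t  cpda;       /* controller PDU data alignment */
    };

    static bool
    icresp_ok(const struct icresp *r)
    {
        if (r->pfv != 0)          return false; /* "Expected ICResp PFV 0, got 1" */
        if (r->maxh2cdata < 4096) return false; /* "maxh2cdata >=4096, got 2048" */
        if (r->cpda > 31)         return false; /* "cpda <=31, got 64" */
        return true;
    }

    int
    main(void)
    {
        struct icresp bad  = { .pfv = 1, .maxh2cdata = 2048, .cpda = 64 };
        struct icresp good = { .pfv = 0, .maxh2cdata = 8192, .cpda = 0 };
        printf("%d %d\n", icresp_ok(&bad), icresp_ok(&good)); /* 0 1 */
        return 0;
    }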
00:03:55.532 passed 00:03:55.532 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:55.532 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-15 18:17:45.190957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.532 [2024-07-15 18:17:45.190967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.532 passed 00:03:55.532 Test: test_nvme_tcp_ctrlr_construct ...passed 00:03:55.532 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:55.532 00:03:55.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.532 suites 1 1 n/a 0 0 00:03:55.532 tests 27 27 27 0 0 00:03:55.532 asserts 624 624 624 0 n/a 00:03:55.532 00:03:55.532 Elapsed time = 0.102 seconds 00:03:55.532 [2024-07-15 18:17:45.191011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:55.532 [2024-07-15 18:17:45.191021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:55.532 [2024-07-15 18:17:45.191035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:55.532 [2024-07-15 18:17:45.191044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:55.533 [2024-07-15 18:17:45.191059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x30307a86b000 with addr=192.168.1.78, port=23 00:03:55.533 [2024-07-15 18:17:45.191068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:55.533 [2024-07-15 18:17:45.191087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x30307a839180, and the iovcnt=1, remaining_size=1024 00:03:55.533 [2024-07-15 18:17:45.191097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:55.533 18:17:45 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: nvme_transport 00:03:55.533 Test: test_nvme_get_transport ...passed 00:03:55.533 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:55.533 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:55.533 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:55.533 Test: test_ctrlr_get_memory_domains ...passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 5 5 5 0 0 00:03:55.533 asserts 28 28 28 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 18:17:45 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: nvme_io_msg 00:03:55.533 Test: 
test_nvme_io_msg_send ...passed 00:03:55.533 Test: test_nvme_io_msg_process ...passed 00:03:55.533 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 3 3 3 0 0 00:03:55.533 asserts 56 56 56 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 18:17:45 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: nvme_pcie_common 00:03:55.533 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:03:55.533 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:55.533 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:55.533 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:03:55.533 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-15 18:17:45.217281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:55.533 [2024-07-15 18:17:45.217562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:55.533 [2024-07-15 18:17:45.217582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:55.533 [2024-07-15 18:17:45.217595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:55.533 passed 00:03:55.533 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-15 18:17:45.217700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.533 passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 6 6 6 0 0 00:03:55.533 asserts 148 148 148 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 [2024-07-15 18:17:45.217728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.533 18:17:45 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: nvme_fabric 00:03:55.533 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:55.533 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:55.533 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:55.533 Test: test_nvme_fabric_discover_probe ...passed 00:03:55.533 Test: test_nvme_fabric_qpair_connect ...passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 5 5 5 0 0 00:03:55.533 asserts 60 60 60 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 [2024-07-15 18:17:45.222520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: 
subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:55.533 18:17:45 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: nvme_opal 00:03:55.533 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:55.533 Test: test_opal_add_short_atom_header ...passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 2 2 2 0 0 00:03:55.533 asserts 22 22 22 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 [2024-07-15 18:17:45.227809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:55.533 00:03:55.533 real 0m15.850s 00:03:55.533 user 0m0.093s 00:03:55.533 sys 0m0.147s 00:03:55.533 18:17:45 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.533 18:17:45 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:03:55.533 ************************************ 00:03:55.533 END TEST unittest_nvme 00:03:55.533 ************************************ 00:03:55.533 18:17:45 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.533 18:17:45 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:55.533 18:17:45 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.533 18:17:45 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.533 18:17:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.533 ************************************ 00:03:55.533 START TEST unittest_log 00:03:55.533 ************************************ 00:03:55.533 18:17:45 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: log 00:03:55.533 Test: log_test ...[2024-07-15 18:17:45.273246] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:03:55.533 [2024-07-15 18:17:45.273446] log_ut.c: 57:log_test: *DEBUG*: log test 00:03:55.533 log dump test: 00:03:55.533 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:55.533 spdk dump test: 00:03:55.533 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:55.533 spdk dump test: 00:03:55.533 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:55.533 passed 00:03:55.533 Test: deprecation ...00000010 65 20 63 68 61 72 73 e chars 00:03:55.533 passed 00:03:55.533 00:03:55.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.533 suites 1 1 n/a 0 0 00:03:55.533 tests 2 2 2 0 0 00:03:55.533 asserts 73 73 73 0 n/a 00:03:55.533 00:03:55.533 Elapsed time = 0.000 seconds 00:03:55.533 00:03:55.533 real 0m1.010s 00:03:55.533 user 0m0.000s 00:03:55.533 sys 0m0.008s 00:03:55.533 18:17:46 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.533 18:17:46 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:03:55.533 ************************************ 00:03:55.533 END TEST unittest_log 00:03:55.533 ************************************ 00:03:55.533 18:17:46 unittest -- 
common/autotest_common.sh@1142 -- # return 0 00:03:55.533 18:17:46 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:55.533 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.533 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.533 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.533 ************************************ 00:03:55.533 START TEST unittest_lvol 00:03:55.533 ************************************ 00:03:55.533 18:17:46 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:55.533 00:03:55.533 00:03:55.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.533 http://cunit.sourceforge.net/ 00:03:55.533 00:03:55.533 00:03:55.533 Suite: lvol 00:03:55.533 Test: lvs_init_unload_success ...[2024-07-15 18:17:46.329479] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:55.533 passed 00:03:55.533 Test: lvs_init_destroy_success ...passed 00:03:55.533 Test: lvs_init_opts_success ...passed 00:03:55.533 Test: lvs_unload_lvs_is_null_fail ...passed 00:03:55.533 Test: lvs_names ...[2024-07-15 18:17:46.329698] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:55.533 [2024-07-15 18:17:46.329731] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:55.533 [2024-07-15 18:17:46.329745] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:55.533 [2024-07-15 18:17:46.329756] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:03:55.533 [2024-07-15 18:17:46.329773] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:55.533 passed 00:03:55.533 Test: lvol_create_destroy_success ...passed 00:03:55.534 Test: lvol_create_fail ...passed 00:03:55.534 Test: lvol_destroy_fail ...passed 00:03:55.534 Test: lvol_close ...passed 00:03:55.534 Test: lvol_resize ...[2024-07-15 18:17:46.329820] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:55.534 [2024-07-15 18:17:46.329837] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:55.534 [2024-07-15 18:17:46.329863] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:55.534 [2024-07-15 18:17:46.329882] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:55.534 [2024-07-15 18:17:46.329891] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:55.534 passed 00:03:55.534 Test: lvol_set_read_only ...passed 00:03:55.534 Test: test_lvs_load ...passed 00:03:55.534 Test: lvols_load ...passed 00:03:55.534 Test: lvol_open ...passed 00:03:55.534 Test: lvol_snapshot ...passed 00:03:55.534 Test: lvol_snapshot_fail ...passed 00:03:55.534 Test: lvol_clone ...[2024-07-15 18:17:46.329951] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:55.534 [2024-07-15 18:17:46.329962] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:55.534 [2024-07-15 18:17:46.329991] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:55.534 [2024-07-15 18:17:46.330016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:55.534 [2024-07-15 18:17:46.330087] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:55.534 passed 00:03:55.534 Test: lvol_clone_fail ...passed 00:03:55.534 Test: lvol_iter_clones ...passed 00:03:55.534 Test: lvol_refcnt ...passed 00:03:55.534 Test: lvol_names ...passed 00:03:55.534 Test: lvol_create_thin_provisioned ...passed 00:03:55.534 Test: lvol_rename ...passed 00:03:55.534 Test: lvs_rename ...passed 00:03:55.534 Test: lvol_inflate ...passed 00:03:55.534 Test: lvol_decouple_parent ...passed 00:03:55.534 Test: lvol_get_xattr ...passed 00:03:55.534 Test: lvol_esnap_reload ...passed 00:03:55.534 Test: lvol_esnap_create_bad_args ...passed 00:03:55.534 Test: lvol_esnap_create_delete ...[2024-07-15 18:17:46.330134] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:55.534 [2024-07-15 18:17:46.330172] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 8927e23d-42d6-11ef-9ade-d5fc5159efa5 because it is still open 00:03:55.534 [2024-07-15 18:17:46.330193] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
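Note: the lvs_init/lvs_verify_lvol_name failures above all reduce to the same naming rules: a name must be non-empty and must contain a NUL terminator inside its buffer (plus uniqueness within the store). A standalone sketch of the first two checks; name_ok is a hypothetical helper, not the SPDK function:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    static bool
    name_ok(const char *name, size_t buflen)
    {
        if (name == NULL || buflen == 0 || name[0] == '\0') {
            return false;               /* cf. "No name specified." */
        }
        if (memchr(name, '\0', buflen) == NULL) {
            return false;               /* cf. "Name has no null terminator." */
        }
        return true;
    }

    int
    main(void)
    {
        char good[64] = "lvs";
        char unterminated[4] = { 'l', 'v', 'o', 'l' };
        printf("%d\n", name_ok(good, sizeof(good)));                 /* 1 */
        printf("%d\n", name_ok("", 1));                              /* 0 */
        printf("%d\n", name_ok(unterminated, sizeof(unterminated))); /* 0 */
        return 0;
    }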
00:03:55.534 [2024-07-15 18:17:46.330206] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:55.534 [2024-07-15 18:17:46.330223] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:55.534 [2024-07-15 18:17:46.330267] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:55.534 [2024-07-15 18:17:46.330285] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:55.534 [2024-07-15 18:17:46.330331] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:55.534 [2024-07-15 18:17:46.330355] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:55.534 [2024-07-15 18:17:46.330375] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:55.534 [2024-07-15 18:17:46.330413] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:55.534 [2024-07-15 18:17:46.330423] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:55.534 [2024-07-15 18:17:46.330434] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:55.534 [2024-07-15 18:17:46.330447] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:55.534 [2024-07-15 18:17:46.330471] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:55.534 passed 00:03:55.534 Test: lvol_esnap_load_esnaps ...[2024-07-15 18:17:46.330506] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:55.534 passed 00:03:55.534 Test: lvol_esnap_missing ...passed 00:03:55.534 Test: lvol_esnap_hotplug ... 
00:03:55.534 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:55.534 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:55.534 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:55.534 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:55.534 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:55.534 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:55.534 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:55.534 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:55.534 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:55.534 [2024-07-15 18:17:46.330701] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:55.534 [2024-07-15 18:17:46.330713] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:55.534 [2024-07-15 18:17:46.330776] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 8927f9cb-42d6-11ef-9ade-d5fc5159efa5: failed to create esnap bs_dev: error -12 00:03:55.534 [2024-07-15 18:17:46.330821] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 8927fb69-42d6-11ef-9ade-d5fc5159efa5: failed to create esnap bs_dev: error -12 00:03:55.534 [2024-07-15 18:17:46.330844] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 8927fc77-42d6-11ef-9ade-d5fc5159efa5: failed to create esnap bs_dev: error -12 00:03:55.534 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:55.534 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:55.534 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:55.534 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:55.534 passed 00:03:55.534 Test: lvol_get_by ...passed 00:03:55.534 Test: lvol_shallow_copy ...passed 00:03:55.534 Test: lvol_set_parent ...passed 00:03:55.534 Test: lvol_set_external_parent ...[2024-07-15 18:17:46.331003] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:55.534 [2024-07-15 18:17:46.331014] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 892802b1-42d6-11ef-9ade-d5fc5159efa5 shallow copy, ext_dev must not be NULL 00:03:55.534 [2024-07-15 18:17:46.331040] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:03:55.534 [2024-07-15 18:17:46.331050] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:03:55.534 passed 00:03:55.534 00:03:55.534 [2024-07-15 18:17:46.331069] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:03:55.534 [2024-07-15 18:17:46.331079] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:03:55.534 [2024-07-15 18:17:46.331092] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 
00:03:55.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.534 suites 1 1 n/a 0 0 00:03:55.534 tests 37 37 37 0 0 00:03:55.534 asserts 1505 1505 1505 0 n/a 00:03:55.534 00:03:55.534 Elapsed time = 0.000 seconds 00:03:55.534 00:03:55.534 real 0m0.008s 00:03:55.534 user 0m0.007s 00:03:55.534 sys 0m0.000s 00:03:55.534 18:17:46 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.534 18:17:46 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:03:55.534 ************************************ 00:03:55.534 END TEST unittest_lvol 00:03:55.534 ************************************ 00:03:55.534 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.534 18:17:46 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.534 18:17:46 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:55.534 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.534 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.534 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.534 ************************************ 00:03:55.534 START TEST unittest_nvme_rdma 00:03:55.534 ************************************ 00:03:55.534 18:17:46 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:55.534 00:03:55.534 00:03:55.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.534 http://cunit.sourceforge.net/ 00:03:55.534 00:03:55.534 00:03:55.534 Suite: nvme_rdma 00:03:55.534 Test: test_nvme_rdma_build_sgl_request ...passed 00:03:55.534 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:55.534 Test: test_nvme_rdma_build_contig_request ...passed 00:03:55.534 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:55.534 Test: test_nvme_rdma_create_reqs ...passed 00:03:55.534 Test: test_nvme_rdma_create_rsps ...passed 00:03:55.534 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-15 18:17:46.384436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:55.534 [2024-07-15 18:17:46.384599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:55.534 [2024-07-15 18:17:46.384627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:55.534 [2024-07-15 18:17:46.384647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:55.534 [2024-07-15 18:17:46.384668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:55.534 [2024-07-15 18:17:46.384701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:55.534 [2024-07-15 18:17:46.384721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:03:55.534 [2024-07-15 18:17:46.384731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:55.534 passed 00:03:55.534 Test: test_nvme_rdma_poller_create ...passed 00:03:55.534 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:03:55.534 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-15 18:17:46.384753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:55.534 passed 00:03:55.534 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:55.534 Test: test_nvme_rdma_req_init ...passed 00:03:55.534 Test: test_nvme_rdma_validate_cm_event ...passed 00:03:55.534 Test: test_nvme_rdma_qpair_init ...passed 00:03:55.534 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:55.534 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:55.535 Test: test_rdma_get_memory_translation ...passed 00:03:55.535 Test: test_get_rdma_qpair_from_wc ...passed 00:03:55.535 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:55.535 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:03:55.535 Test: test_nvme_rdma_qpair_set_poller ...passed 00:03:55.535 00:03:55.535 [2024-07-15 18:17:46.384805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:55.535 [2024-07-15 18:17:46.384816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:55.535 [2024-07-15 18:17:46.384835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:55.535 [2024-07-15 18:17:46.384848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:55.535 [2024-07-15 18:17:46.384865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.535 [2024-07-15 18:17:46.384874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:55.535 [2024-07-15 18:17:46.384895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:55.535 [2024-07-15 18:17:46.384904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:55.535 [2024-07-15 18:17:46.384913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b0e068 on poll group 0x32c0e3472000 00:03:55.535 [2024-07-15 18:17:46.384922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
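Note: both the TCP and the RDMA transport reject queue pairs of size 0 or 1, as seen above. This is consistent with how NVMe ring queues work: full and empty are distinguished by leaving one slot unused, so a queue of depth N holds at most N-1 commands and depth 2 is the useful minimum (my reading of the errors; the tests only pin the boundary). Sketch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN_QUEUE_SIZE 2u   /* cf. "Minimum queue size is 2." */

    static bool
    qpair_size_ok(uint32_t num_entries)
    {
        return num_entries >= MIN_QUEUE_SIZE;
    }

    int
    main(void)
    {
        for (uint32_t n = 0; n <= 2; n++) {
            printf("size %u: %s\n", n, qpair_size_ok(n) ? "ok" : "rejected");
        }
        return 0;
    }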
00:03:55.535 [2024-07-15 18:17:46.384930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:55.535 [2024-07-15 18:17:46.384938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b0e068 on poll group 0x32c0e3472000 00:03:55.535 [2024-07-15 18:17:46.384979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:55.535 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.535 suites 1 1 n/a 0 0 00:03:55.535 tests 21 21 21 0 0 00:03:55.535 asserts 397 397 397 0 n/a 00:03:55.535 00:03:55.535 Elapsed time = 0.000 seconds 00:03:55.535 00:03:55.535 real 0m0.007s 00:03:55.535 user 0m0.007s 00:03:55.535 sys 0m0.000s 00:03:55.535 18:17:46 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.535 ************************************ 00:03:55.535 END TEST unittest_nvme_rdma 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.535 18:17:46 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 ************************************ 00:03:55.535 START TEST unittest_nvmf_transport 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:55.535 00:03:55.535 00:03:55.535 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.535 http://cunit.sourceforge.net/ 00:03:55.535 00:03:55.535 00:03:55.535 Suite: nvmf 00:03:55.535 Test: test_spdk_nvmf_transport_create ...[2024-07-15 18:17:46.428659] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:03:55.535 [2024-07-15 18:17:46.428835] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:55.535 [2024-07-15 18:17:46.428853] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:55.535 [2024-07-15 18:17:46.428879] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:55.535 passed 00:03:55.535 Test: test_nvmf_transport_poll_group_create ...passed 00:03:55.535 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-15 18:17:46.428903] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
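Note: the transport_ut errors above pin three option rules for creating an NVMe-oF transport: io_unit_size must be non-zero, io_unit_size must fit the iobuf pool's large buffer (65536 in this run), and max_io_size must be a power of two no smaller than 8 KiB. A standalone sketch of those checks; the constants and the transport_opts_ok helper are illustrative, not SPDK's validation function:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LARGE_BUF_SIZE  65536u  /* iobuf pool large buffer in this run */
    #define MIN_MAX_IO_SIZE 8192u   /* "greater than or equal 8KB" */

    static bool
    is_pow2(uint32_t v)
    {
        return v != 0 && (v & (v - 1)) == 0;
    }

    static bool
    transport_opts_ok(uint32_t io_unit_size, uint32_t max_io_size)
    {
        if (io_unit_size == 0)             return false;
        if (io_unit_size > LARGE_BUF_SIZE) return false;
        if (!is_pow2(max_io_size) || max_io_size < MIN_MAX_IO_SIZE) {
            return false;
        }
        return true;
    }

    int
    main(void)
    {
        printf("%d\n", transport_opts_ok(0, 131072));      /* 0: zero io_unit_size */
        printf("%d\n", transport_opts_ok(131072, 131072)); /* 0: exceeds pool buffer */
        printf("%d\n", transport_opts_ok(8192, 4096));     /* 0: max_io_size < 8 KiB */
        printf("%d\n", transport_opts_ok(8192, 131072));   /* 1 */
        return 0;
    }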
00:03:55.535 [2024-07-15 18:17:46.428912] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:55.535 passed 00:03:55.535 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:55.535 00:03:55.535 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.535 suites 1 1 n/a 0 0 00:03:55.535 tests 4 4 4 0 0 00:03:55.535 asserts 49 49 49 0 n/a 00:03:55.535 00:03:55.535 Elapsed time = 0.000 seconds 00:03:55.535 [2024-07-15 18:17:46.428923] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:55.535 00:03:55.535 real 0m0.005s 00:03:55.535 user 0m0.004s 00:03:55.535 sys 0m0.004s 00:03:55.535 18:17:46 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.535 ************************************ 00:03:55.535 END TEST unittest_nvmf_transport 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.535 18:17:46 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 ************************************ 00:03:55.535 START TEST unittest_rdma 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:55.535 00:03:55.535 00:03:55.535 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.535 http://cunit.sourceforge.net/ 00:03:55.535 00:03:55.535 00:03:55.535 Suite: rdma_common 00:03:55.535 Test: test_spdk_rdma_pd ...[2024-07-15 18:17:46.473804] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:55.535 [2024-07-15 18:17:46.474062] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:55.535 passed 00:03:55.535 00:03:55.535 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.535 suites 1 1 n/a 0 0 00:03:55.535 tests 1 1 1 0 0 00:03:55.535 asserts 31 31 31 0 n/a 00:03:55.535 00:03:55.535 Elapsed time = 0.000 seconds 00:03:55.535 00:03:55.535 real 0m0.006s 00:03:55.535 user 0m0.000s 00:03:55.535 sys 0m0.008s 00:03:55.535 18:17:46 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.535 18:17:46 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 ************************************ 00:03:55.535 END TEST unittest_rdma 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.535 18:17:46 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.535 18:17:46 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:03:55.535 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.535 ************************************ 00:03:55.535 START TEST unittest_nvmf 00:03:55.535 ************************************ 00:03:55.535 18:17:46 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:03:55.535 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:55.535 00:03:55.535 00:03:55.535 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.535 http://cunit.sourceforge.net/ 00:03:55.535 00:03:55.535 00:03:55.535 Suite: nvmf 00:03:55.535 Test: test_get_log_page ...[2024-07-15 18:17:46.530064] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:55.535 passed 00:03:55.535 Test: test_process_fabrics_cmd ...passed 00:03:55.535 Test: test_connect ...[2024-07-15 18:17:46.530365] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:03:55.535 [2024-07-15 18:17:46.530464] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:55.535 [2024-07-15 18:17:46.530485] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:55.535 [2024-07-15 18:17:46.530509] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:55.535 [2024-07-15 18:17:46.530527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:55.535 [2024-07-15 18:17:46.530544] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:55.535 [2024-07-15 18:17:46.530561] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:55.535 [2024-07-15 18:17:46.530577] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:55.535 [2024-07-15 18:17:46.530594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
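Note: the test_connect failures above probe the Fabrics Connect command's SQSIZE field, which is a zero's-based value (SQSIZE = queue depth - 1). That is why an admin queue of depth 32 accepts 1..31, an I/O queue of depth 64 accepts 1..63, and 0 is always rejected. A sketch of that check; sqsize_ok is a hypothetical helper, not the SPDK code path:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool
    sqsize_ok(uint16_t sqsize, uint16_t queue_depth)
    {
        /* 0 is reserved; the maximum is depth - 1 (zero's-based). */
        return sqsize >= 1 && sqsize <= (uint16_t)(queue_depth - 1);
    }

    int
    main(void)
    {
        printf("%d\n", sqsize_ok(0, 32));  /* 0: "Invalid SQSIZE = 0" */
        printf("%d\n", sqsize_ok(32, 32)); /* 0: admin queue, max 31 */
        printf("%d\n", sqsize_ok(64, 64)); /* 0: I/O queue, max 63 */
        printf("%d\n", sqsize_ok(31, 32)); /* 1 */
        return 0;
    }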
00:03:55.535 [2024-07-15 18:17:46.530626] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:55.535 [2024-07-15 18:17:46.530652] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:55.535 [2024-07-15 18:17:46.530683] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:55.535 [2024-07-15 18:17:46.530703] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:55.535 [2024-07-15 18:17:46.530722] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:55.535 passed 00:03:55.535 Test: test_get_ns_id_desc_list ...passed 00:03:55.536 Test: test_identify_ns ...[2024-07-15 18:17:46.530741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:55.536 [2024-07-15 18:17:46.530771] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:03:55.536 [2024-07-15 18:17:46.530797] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:03:55.536 [2024-07-15 18:17:46.530816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:55.536 [2024-07-15 18:17:46.530876] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:55.536 passed 00:03:55.536 Test: test_identify_ns_iocs_specific ...[2024-07-15 18:17:46.530937] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:55.536 [2024-07-15 18:17:46.530974] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:55.536 [2024-07-15 18:17:46.531010] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:55.536 passed 00:03:55.536 Test: test_reservation_write_exclusive ...passed 00:03:55.536 Test: test_reservation_exclusive_access ...passed 00:03:55.536 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:55.536 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:55.536 Test: test_reservation_notification_log_page ...passed[2024-07-15 18:17:46.531069] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:55.536 00:03:55.536 Test: test_get_dif_ctx ...passed 00:03:55.536 Test: test_set_get_features ...passed 00:03:55.536 Test: test_identify_ctrlr ...passed 00:03:55.536 Test: test_identify_ctrlr_iocs_specific ...[2024-07-15 18:17:46.531188] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:55.536 [2024-07-15 18:17:46.531207] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:55.536 [2024-07-15 18:17:46.531222] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:55.536 [2024-07-15 18:17:46.531237] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:55.536 passed 00:03:55.536 Test: test_custom_admin_cmd ...passed 00:03:55.536 Test: test_fused_compare_and_write ...passed 00:03:55.536 Test: test_multi_async_event_reqs ...passed 00:03:55.536 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:55.536 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:55.536 Test: test_multi_async_events ...passed 00:03:55.536 Test: test_rae ...passed 00:03:55.536 Test: test_nvmf_ctrlr_create_destruct ...[2024-07-15 18:17:46.531341] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:55.536 [2024-07-15 18:17:46.531359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:55.536 [2024-07-15 18:17:46.531376] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:55.536 passed 00:03:55.536 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:55.536 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:03:55.536 Test: test_zcopy_read ...passed[2024-07-15 18:17:46.531482] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:03:55.536 [2024-07-15 18:17:46.531503] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:03:55.536 00:03:55.536 Test: test_zcopy_write ...passed 00:03:55.536 Test: test_nvmf_property_set ...passed 00:03:55.536 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:03:55.536 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:03:55.536 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:03:55.536 Test: test_nvmf_check_qpair_active ...passed 00:03:55.536 00:03:55.536 [2024-07-15 18:17:46.531550] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:55.536 [2024-07-15 18:17:46.531573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:55.536 [2024-07-15 18:17:46.531592] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:55.536 [2024-07-15 18:17:46.531608] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:55.536 [2024-07-15 18:17:46.531623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:55.536 [2024-07-15 18:17:46.531657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:03:55.536 [2024-07-15 18:17:46.531673] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:03:55.536 [2024-07-15 18:17:46.531696] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:03:55.536 [2024-07-15 18:17:46.531711] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:03:55.536 [2024-07-15 18:17:46.531726] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:03:55.536 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.536 suites 1 1 n/a 0 0 00:03:55.536 tests 32 32 32 0 0 00:03:55.536 asserts 977 977 977 0 n/a 00:03:55.536 00:03:55.536 Elapsed time = 0.000 seconds 00:03:55.536 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:55.536 00:03:55.536 00:03:55.536 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.536 http://cunit.sourceforge.net/ 00:03:55.536 00:03:55.536 00:03:55.536 Suite: nvmf 00:03:55.536 Test: test_get_rw_params ...passed 00:03:55.536 Test: test_get_rw_ext_params ...passed 00:03:55.536 Test: test_lba_in_range ...passed 00:03:55.536 Test: test_get_dif_ctx ...passed 00:03:55.536 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:55.536 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:03:55.536 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:55.536 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:03:55.536 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:55.536 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:55.536 00:03:55.536 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.536 suites 1 1 n/a 0 0 00:03:55.536 tests 10 10 10 0 0 00:03:55.536 asserts 159 159 159 0 n/a 00:03:55.536 00:03:55.536 Elapsed time = 0.000 seconds 00:03:55.536 [2024-07-15 18:17:46.538279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:55.536 [2024-07-15 18:17:46.538455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:55.536 [2024-07-15 18:17:46.538468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:55.536 [2024-07-15 18:17:46.538482] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:55.536 [2024-07-15 18:17:46.538494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:55.536 [2024-07-15 18:17:46.538505] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:55.536 [2024-07-15 18:17:46.538518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:55.536 [2024-07-15 18:17:46.538529] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:55.536 [2024-07-15 18:17:46.538538] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:55.536 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:55.536 00:03:55.536 00:03:55.536 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.536 http://cunit.sourceforge.net/ 00:03:55.536 00:03:55.536 00:03:55.536 
Suite: nvmf 00:03:55.536 Test: test_discovery_log ...passed 00:03:55.536 Test: test_discovery_log_with_filters ...passed 00:03:55.536 00:03:55.536 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.536 suites 1 1 n/a 0 0 00:03:55.536 tests 2 2 2 0 0 00:03:55.536 asserts 238 238 238 0 n/a 00:03:55.536 00:03:55.536 Elapsed time = 0.000 seconds 00:03:55.536 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:55.536 00:03:55.536 00:03:55.536 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.536 http://cunit.sourceforge.net/ 00:03:55.536 00:03:55.536 00:03:55.536 Suite: nvmf 00:03:55.536 Test: nvmf_test_create_subsystem ...[2024-07-15 18:17:46.548984] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:55.536 [2024-07-15 18:17:46.549209] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:03:55.536 [2024-07-15 18:17:46.549235] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:55.536 [2024-07-15 18:17:46.549250] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:03:55.536 [2024-07-15 18:17:46.549263] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:55.536 [2024-07-15 18:17:46.549275] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:03:55.536 [2024-07-15 18:17:46.549287] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:55.537 [2024-07-15 18:17:46.549299] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:03:55.537 [2024-07-15 18:17:46.549312] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:55.537 [2024-07-15 18:17:46.549323] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:03:55.537 [2024-07-15 18:17:46.549336] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
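The subsystem_ut failures here and just below come from feeding deliberately malformed NQNs through the validator: empty strings, overlong names, labels that start with a digit or end with a hyphen, and doubled dots. As a rough illustration only (a simplified sketch, not SPDK's actual nvmf_nqn_is_valid(), which also handles the nqn.2014-08.org.nvmexpress:uuid: form and UTF-8), the checks reduce to length bounds, a fixed prefix, and per-label rules:

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

#define NQN_MIN_LEN 11   /* strlen("nqn.yyyy-mm"): "" and "nqn." fail here */
#define NQN_MAX_LEN 223  /* spec limit: the 224-char NQN above fails here */

/* Simplified sketch; the date digits, uuid form, and UTF-8 handling of
 * the real validator are deliberately omitted. */
static bool
nqn_is_valid_sketch(const char *nqn)
{
	size_t len = strlen(nqn);
	const char *p, *colon;

	if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
		return false;
	}
	if (strncmp(nqn, "nqn.", 4) != 0 || nqn[11] != '.') {
		return false;
	}
	colon = strchr(nqn, ':');
	if (colon == NULL || colon[1] == '\0') {
		return false;	/* "nqn.2016-06.io.spdk:" has no user name */
	}
	/* Each dot-separated domain label must be 1..63 chars, start with
	 * a letter, and end alphanumeric, so ".3spdk", ".-spdk", "spdk-"
	 * and the empty label in "io..spdk" above all fail. */
	p = nqn + 11;
	while (p < colon) {
		const char *end;

		if (*p != '.') {
			return false;
		}
		p++;
		end = p;
		while (end < colon && *end != '.') {
			end++;
		}
		if (end == p || end - p > 63 ||
		    !isalpha((unsigned char)p[0]) ||
		    !isalnum((unsigned char)end[-1])) {
			return false;
		}
		p = end;
	}
	return true;
}
```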
00:03:55.537 [2024-07-15 18:17:46.549347] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:03:55.537 [2024-07-15 18:17:46.549380] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:55.537 [2024-07-15 18:17:46.549394] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:03:55.537 passed 00:03:55.537 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:03:55.537 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:03:55.537 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:55.537 Test: test_spdk_nvmf_ns_visible ...passed 00:03:55.537 Test: test_reservation_register ...passed 00:03:55.537 Test: test_reservation_register_with_ptpl ...passed 00:03:55.537 Test: test_reservation_acquire_preempt_1 ...passed 00:03:55.537 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-15 18:17:46.549424] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:55.537 [2024-07-15 18:17:46.549437] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:03:55.537 [2024-07-15 18:17:46.549453] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:55.537 [2024-07-15 18:17:46.549465] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:03:55.537 [2024-07-15 18:17:46.549478] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:55.537 [2024-07-15 18:17:46.549490] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:55.537 [2024-07-15 18:17:46.549504] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:55.537 [2024-07-15 18:17:46.549516] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:55.537 [2024-07-15 18:17:46.549578] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:55.537 [2024-07-15 18:17:46.549592] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:55.537 [2024-07-15 18:17:46.549617] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2158:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:03:55.537 [2024-07-15 18:17:46.549648] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:03:55.537 [2024-07-15 18:17:46.549729] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.549749] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:55.537 [2024-07-15 18:17:46.549943] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 passed 00:03:55.537 Test: test_reservation_release ...passed 00:03:55.537 Test: test_reservation_unregister_notification ...passed 00:03:55.537 Test: test_reservation_release_notification ...passed 00:03:55.537 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:55.537 Test: test_reservation_clear_notification ...[2024-07-15 18:17:46.550119] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.550154] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.550176] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.550198] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.550219] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 passed 00:03:55.537 Test: test_reservation_preempt_notification ...passed 00:03:55.537 Test: test_spdk_nvmf_ns_event ...passed 00:03:55.537 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:55.537 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:55.537 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:03:55.537 Test: test_nvmf_ns_reservation_report ...passed 00:03:55.537 Test: test_nvmf_nqn_is_valid ...passed 00:03:55.537 Test: test_nvmf_ns_reservation_restore ...passed 00:03:55.537 Test: test_nvmf_subsystem_state_change ...passed 00:03:55.537 Test: test_nvmf_reservation_custom_ops ...[2024-07-15 18:17:46.550240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:55.537 [2024-07-15 18:17:46.550354] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:55.537 [2024-07-15 18:17:46.550381] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:03:55.537 [2024-07-15 18:17:46.550404] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3466:nvmf_ns_reservation_report: *ERROR*: 
NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:55.537 [2024-07-15 18:17:46.550434] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:55.537 [2024-07-15 18:17:46.550447] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:89497df0-42d6-11ef-9ade-d5fc5159efa": uuid is not the correct length 00:03:55.537 [2024-07-15 18:17:46.550460] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:55.537 [2024-07-15 18:17:46.550495] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:55.537 passed 00:03:55.537 00:03:55.537 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.537 suites 1 1 n/a 0 0 00:03:55.537 tests 24 24 24 0 0 00:03:55.537 asserts 499 499 499 0 n/a 00:03:55.537 00:03:55.537 Elapsed time = 0.000 seconds 00:03:55.537 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:55.537 00:03:55.537 00:03:55.537 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.537 http://cunit.sourceforge.net/ 00:03:55.537 00:03:55.537 00:03:55.537 Suite: nvmf 00:03:55.537 Test: test_nvmf_tcp_create ...[2024-07-15 18:17:46.560104] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:55.537 passed 00:03:55.537 Test: test_nvmf_tcp_destroy ...passed 00:03:55.537 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:55.537 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:55.537 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:55.537 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:55.537 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:55.537 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 18:17:46.571294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.537 [2024-07-15 18:17:46.571334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.537 [2024-07-15 18:17:46.571346] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.537 passed 00:03:55.537 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:55.537 Test: test_nvmf_tcp_icreq_handle ...passed 00:03:55.537 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:55.537 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:55.537 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 18:17:46.571355] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.537 [2024-07-15 18:17:46.571363] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.537 [2024-07-15 18:17:46.571394] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:55.537 [2024-07-15 18:17:46.571403] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.537 [2024-07-15 18:17:46.571412] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e8d0 is same with the state(5) to be set 00:03:55.537 [2024-07-15 18:17:46.571420] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:55.538 [2024-07-15 18:17:46.571429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e8d0 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571437] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571445] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e8d0 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571454] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571462] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e8d0 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571478] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2518:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:55.538 [2024-07-15 18:17:46.571487] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e8d0 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571506] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x82031e158 00:03:55.538 [2024-07-15 18:17:46.571515] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571523] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571532] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2308:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x82031e9c8 00:03:55.538 [2024-07-15 18:17:46.571541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571548] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571560] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:55.538 [2024-07-15 18:17:46.571569] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571577] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571586] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:55.538 [2024-07-15 18:17:46.571594] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571625] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571641] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571650] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571658] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571669] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 passed 00:03:55.538 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-15 18:17:46.571677] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571716] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 [2024-07-15 18:17:46.571725] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:55.538 [2024-07-15 18:17:46.571733] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82031e9c8 is same with the state(5) to be set 00:03:55.538 passed 00:03:55.538 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-15 18:17:46.576807] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:55.538 [2024-07-15 18:17:46.576829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
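Most of the tcp_ut output above is the qpair receive-path state machine being driven through malformed PDUs; the repeated "recv state ... is same with the state(5) to be set" lines are the guard that fires when a transition is a no-op. A generic sketch of that guard pattern (illustrative state names and numbering, not SPDK's actual enum or nvmf_tcp_qpair_set_recv_state()):

```c
#include <stdio.h>

/* Illustrative receive states; SPDK's enum differs in detail. */
enum tqpair_recv_state {
	RECV_STATE_AWAIT_PDU_READY,	/* 0 */
	RECV_STATE_AWAIT_PDU_CH,	/* 1: common header */
	RECV_STATE_AWAIT_PDU_PSH,	/* 2: PDU-specific header */
	RECV_STATE_AWAIT_PDU_PAYLOAD,	/* 3 */
	RECV_STATE_QUIESCING,		/* 4 */
	RECV_STATE_ERROR,		/* 5: where the rejected PDUs land */
};

struct tqpair {
	enum tqpair_recv_state recv_state;
};

static void
set_recv_state(struct tqpair *tq, enum tqpair_recv_state state)
{
	if (tq->recv_state == state) {
		/* Matches the log: re-setting the current state is
		 * reported but harmless, so it is simply skipped. */
		fprintf(stderr, "The recv state of tqpair=%p is same with "
			"the state(%d) to be set\n", (void *)tq, (int)state);
		return;
	}
	tq->recv_state = state;
}
```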
00:03:55.538 passed 00:03:55.538 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:03:55.538 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:03:55.538 00:03:55.538 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.538 suites 1 1 n/a 0 0 00:03:55.538 tests 17 17 17 0 0 00:03:55.538 asserts 222 222 222 0 n/a 00:03:55.538 00:03:55.538 Elapsed time = 0.016 seconds 00:03:55.538 [2024-07-15 18:17:46.576941] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:55.538 [2024-07-15 18:17:46.576955] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:55.538 [2024-07-15 18:17:46.577019] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:55.538 [2024-07-15 18:17:46.577030] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:55.538 18:17:46 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:55.538 00:03:55.538 00:03:55.538 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.538 http://cunit.sourceforge.net/ 00:03:55.538 00:03:55.538 00:03:55.538 Suite: nvmf 00:03:55.538 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:55.538 00:03:55.538 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.538 suites 1 1 n/a 0 0 00:03:55.538 tests 1 1 1 0 0 00:03:55.538 asserts 17 17 17 0 n/a 00:03:55.538 00:03:55.538 Elapsed time = 0.000 seconds 00:03:55.538 00:03:55.538 real 0m0.062s 00:03:55.538 user 0m0.029s 00:03:55.538 sys 0m0.037s 00:03:55.538 18:17:46 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.538 ************************************ 00:03:55.538 END TEST unittest_nvmf 00:03:55.538 ************************************ 00:03:55.538 18:17:46 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.538 18:17:46 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.538 18:17:46 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.538 18:17:46 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.538 ************************************ 00:03:55.538 START TEST unittest_nvmf_rdma 00:03:55.538 ************************************ 00:03:55.538 18:17:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:55.538 00:03:55.538 00:03:55.538 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.538 http://cunit.sourceforge.net/ 00:03:55.538 00:03:55.538 00:03:55.538 Suite: nvmf 00:03:55.538 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-15 18:17:46.634052] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:55.538 [2024-07-15 18:17:46.634217] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:55.538 [2024-07-15 18:17:46.634231] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:55.538 passed 00:03:55.538 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:55.538 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:55.538 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:55.538 Test: test_nvmf_rdma_opts_init ...passed 00:03:55.538 Test: test_nvmf_rdma_request_free_data ...passed 00:03:55.538 Test: test_nvmf_rdma_resources_create ...passed 00:03:55.538 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:55.538 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 18:17:46.634895] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:55.538 Using CQ of insufficient size may lead to CQ overrun 00:03:55.538 passed 00:03:55.538 00:03:55.538 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.538 suites 1 1 n/a 0 0 00:03:55.538 tests 9 9 9 0 0 00:03:55.538 asserts 579 579 579 0 n/a 00:03:55.538 00:03:55.538 Elapsed time = 0.000 seconds 00:03:55.538 [2024-07-15 18:17:46.634910] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:55.538 [2024-07-15 18:17:46.634947] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:55.538 00:03:55.538 real 0m0.005s 00:03:55.538 user 0m0.000s 00:03:55.538 sys 0m0.008s 00:03:55.538 18:17:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.538 18:17:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:55.538 ************************************ 00:03:55.538 END TEST unittest_nvmf_rdma 00:03:55.538 ************************************ 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.538 18:17:46 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.538 18:17:46 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.538 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.538 ************************************ 00:03:55.538 START TEST unittest_scsi 00:03:55.538 ************************************ 00:03:55.538 18:17:46 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:03:55.538 18:17:46 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:55.538 00:03:55.538 00:03:55.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.539 http://cunit.sourceforge.net/ 00:03:55.539 00:03:55.539 00:03:55.539 Suite: dev_suite 00:03:55.539 Test: dev_destruct_null_dev ...passed 00:03:55.539 Test: dev_destruct_zero_luns ...passed 00:03:55.539 
Test: dev_destruct_null_lun ...passed 00:03:55.539 Test: dev_destruct_success ...passed 00:03:55.539 Test: dev_construct_num_luns_zero ...passed 00:03:55.539 Test: dev_construct_no_lun_zero ...passed 00:03:55.539 Test: dev_construct_null_lun ...[2024-07-15 18:17:46.684337] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:55.539 [2024-07-15 18:17:46.684570] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:55.539 [2024-07-15 18:17:46.684594] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:55.539 passed 00:03:55.539 Test: dev_construct_name_too_long ...passed 00:03:55.539 Test: dev_construct_success ...passed 00:03:55.539 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:55.539 Test: dev_queue_mgmt_task_success ...passed 00:03:55.539 Test: dev_queue_task_success ...passed 00:03:55.539 Test: dev_stop_success ...passed 00:03:55.539 Test: dev_add_port_max_ports ...passed 00:03:55.539 Test: dev_add_port_construct_failure1 ...passed 00:03:55.539 Test: dev_add_port_construct_failure2 ...passed 00:03:55.539 Test: dev_add_port_success1 ...passed 00:03:55.539 Test: dev_add_port_success2 ...passed 00:03:55.539 Test: dev_add_port_success3 ...passed 00:03:55.539 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:55.539 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:55.539 Test: dev_find_port_by_id_success ...passed 00:03:55.539 Test: dev_add_lun_bdev_not_found ...passed 00:03:55.539 Test: dev_add_lun_no_free_lun_id ...[2024-07-15 18:17:46.684612] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:55.539 [2024-07-15 18:17:46.684677] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:55.539 [2024-07-15 18:17:46.684695] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:55.539 [2024-07-15 18:17:46.684717] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:55.539 passed 00:03:55.539 Test: dev_add_lun_success1 ...passed 00:03:55.539 Test: dev_add_lun_success2 ...passed 00:03:55.539 Test: dev_check_pending_tasks ...passed 00:03:55.539 Test: dev_iterate_luns ...passed 00:03:55.539 Test: dev_find_free_lun ...[2024-07-15 18:17:46.684966] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:55.539 passed 00:03:55.539 00:03:55.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.539 suites 1 1 n/a 0 0 00:03:55.539 tests 29 29 29 0 0 00:03:55.539 asserts 97 97 97 0 n/a 00:03:55.539 00:03:55.539 Elapsed time = 0.000 seconds 00:03:55.539 18:17:46 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:55.539 00:03:55.539 00:03:55.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.539 http://cunit.sourceforge.net/ 00:03:55.539 00:03:55.539 00:03:55.539 Suite: lun_suite 00:03:55.539 Test: 
lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-15 18:17:46.692211] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:55.539 passed 00:03:55.539 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:03:55.539 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:55.539 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:55.539 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:55.539 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-15 18:17:46.692469] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:55.539 [2024-07-15 18:17:46.692494] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:55.539 passed 00:03:55.539 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:55.539 Test: lun_append_task_null_lun_not_supported ...passed 00:03:55.539 Test: lun_execute_scsi_task_pending ...passed 00:03:55.539 Test: lun_execute_scsi_task_complete ...passed 00:03:55.539 Test: lun_execute_scsi_task_resize ...passed 00:03:55.539 Test: lun_destruct_success ...passed 00:03:55.539 Test: lun_construct_null_ctx ...passed 00:03:55.539 Test: lun_construct_success ...passed 00:03:55.539 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:55.539 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:55.539 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:55.539 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:55.539 00:03:55.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.539 suites 1 1 n/a 0 0 00:03:55.539 tests 18 18 18 0 0 00:03:55.539 asserts 153 153 153 0 n/a 00:03:55.539 00:03:55.539 Elapsed time = 0.000 seconds 00:03:55.539 [2024-07-15 18:17:46.692696] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:55.539 18:17:46 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:55.539 00:03:55.539 00:03:55.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.539 http://cunit.sourceforge.net/ 00:03:55.539 00:03:55.539 00:03:55.539 Suite: scsi_suite 00:03:55.539 Test: scsi_init ...passed 00:03:55.539 00:03:55.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.539 suites 1 1 n/a 0 0 00:03:55.539 tests 1 1 1 0 0 00:03:55.539 asserts 1 1 1 0 n/a 00:03:55.539 00:03:55.539 Elapsed time = 0.000 seconds 00:03:55.539 18:17:46 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:55.539 00:03:55.539 00:03:55.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.539 http://cunit.sourceforge.net/ 00:03:55.539 00:03:55.539 00:03:55.539 Suite: translation_suite 00:03:55.539 Test: mode_select_6_test ...passed 00:03:55.539 Test: mode_select_6_test2 ...passed 00:03:55.539 Test: mode_sense_6_test ...passed 00:03:55.539 Test: mode_sense_10_test ...passed 00:03:55.539 Test: inquiry_evpd_test ...passed 00:03:55.539 Test: inquiry_standard_test ...passed 00:03:55.539 Test: inquiry_overflow_test ...passed 00:03:55.539 Test: task_complete_test ...passed 00:03:55.539 Test: lba_range_test ...passed 00:03:55.539 Test: xfer_len_test ...[2024-07-15 18:17:46.705273] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > 
maximum transfer length 8192 00:03:55.539 passed 00:03:55.539 Test: xfer_test ...passed 00:03:55.539 Test: scsi_name_padding_test ...passed 00:03:55.539 Test: get_dif_ctx_test ...passed 00:03:55.539 Test: unmap_split_test ...passed 00:03:55.539 00:03:55.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.539 suites 1 1 n/a 0 0 00:03:55.539 tests 14 14 14 0 0 00:03:55.539 asserts 1205 1205 1205 0 n/a 00:03:55.539 00:03:55.539 Elapsed time = 0.000 seconds 00:03:55.539 18:17:46 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:55.539 00:03:55.539 00:03:55.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.539 http://cunit.sourceforge.net/ 00:03:55.539 00:03:55.539 00:03:55.539 Suite: reservation_suite 00:03:55.539 Test: test_reservation_register ...passed 00:03:55.539 Test: test_reservation_reserve ...passed 00:03:55.539 Test: test_all_registrant_reservation_reserve ...passed 00:03:55.539 Test: test_all_registrant_reservation_access ...passed 00:03:55.539 Test: test_reservation_preempt_non_all_regs ...passed 00:03:55.539 Test: test_reservation_preempt_all_regs ...[2024-07-15 18:17:46.712004] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712297] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712322] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:55.539 [2024-07-15 18:17:46.712340] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:55.539 [2024-07-15 18:17:46.712365] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712403] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712430] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:03:55.539 [2024-07-15 18:17:46.712446] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:03:55.539 [2024-07-15 18:17:46.712469] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712486] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:55.539 [2024-07-15 18:17:46.712511] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 passed 00:03:55.539 Test: test_reservation_cmds_conflict ...[2024-07-15 18:17:46.712544] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.539 [2024-07-15 18:17:46.712563] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:55.539 passed 00:03:55.539 Test: test_scsi2_reserve_release 
...passed 00:03:55.539 Test: test_pr_with_scsi2_reserve_release ...passed 00:03:55.539 00:03:55.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.539 suites 1 1 n/a 0 0 00:03:55.539 tests 9 9 9 0 0 00:03:55.539 asserts 344 344 344 0 n/a 00:03:55.539 00:03:55.539 Elapsed time = 0.000 seconds 00:03:55.539 [2024-07-15 18:17:46.712580] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:55.539 [2024-07-15 18:17:46.712595] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:55.539 [2024-07-15 18:17:46.712610] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:55.540 [2024-07-15 18:17:46.712625] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:55.540 [2024-07-15 18:17:46.712657] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:55.540 00:03:55.540 real 0m0.034s 00:03:55.540 user 0m0.000s 00:03:55.540 sys 0m0.034s 00:03:55.540 18:17:46 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.540 18:17:46 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 ************************************ 00:03:55.540 END TEST unittest_scsi 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.540 18:17:46 unittest -- unit/unittest.sh@278 -- # uname -s 00:03:55.540 18:17:46 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:03:55.540 18:17:46 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 ************************************ 00:03:55.540 START TEST unittest_thread 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:55.540 00:03:55.540 00:03:55.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.540 http://cunit.sourceforge.net/ 00:03:55.540 00:03:55.540 00:03:55.540 Suite: io_channel 00:03:55.540 Test: thread_alloc ...passed 00:03:55.540 Test: thread_send_msg ...passed 00:03:55.540 Test: thread_poller ...passed 00:03:55.540 Test: poller_pause ...passed 00:03:55.540 Test: thread_for_each ...passed 00:03:55.540 Test: for_each_channel_remove ...passed 00:03:55.540 Test: for_each_channel_unreg ...[2024-07-15 18:17:46.766375] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x82029f9c4 already registered (old:0x11d336667000 new:0x11d336667180) 00:03:55.540 passed 00:03:55.540 Test: thread_name ...passed 00:03:55.540 Test: channel ...passed 00:03:55.540 Test: channel_destroy_races ...[2024-07-15 18:17:46.766953] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228838 
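The two thread_ut errors directly above correspond to the classic io_device misuse cases: registering the same device pointer twice ("already registered") and requesting a channel for a pointer that was never registered ("could not find io_device"). A minimal usage sketch of the intended pairing, assuming the public spdk/thread.h API; my_dev_ctx, my_ch_ctx, and the callbacks are hypothetical names:

```c
#include "spdk/thread.h"

struct my_dev_ctx { int tick; };          /* hypothetical device */
struct my_ch_ctx  { uint64_t io_count; }; /* per-thread channel state */

static int
my_ch_create(void *io_device, void *ctx_buf)
{
	struct my_ch_ctx *ch = ctx_buf;

	ch->io_count = 0;	/* runs once per thread that opens a channel */
	return 0;
}

static void
my_ch_destroy(void *io_device, void *ctx_buf)
{
}

static struct my_dev_ctx g_dev;

static void
use_io_channel(void)
{
	struct spdk_io_channel *ch;

	/* Registering the same io_device pointer a second time is what
	 * produces the "already registered" error above. */
	spdk_io_device_register(&g_dev, my_ch_create, my_ch_destroy,
				sizeof(struct my_ch_ctx), "my_dev");

	/* Passing a pointer that was never registered instead yields
	 * "could not find io_device". */
	ch = spdk_get_io_channel(&g_dev);
	if (ch != NULL) {
		struct my_ch_ctx *ctx = spdk_io_channel_get_ctx(ch);

		ctx->io_count++;
		spdk_put_io_channel(ch);
	}
	spdk_io_device_unregister(&g_dev, NULL);
}
```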
00:03:55.540 passed 00:03:55.540 Test: thread_exit_test ...[2024-07-15 18:17:46.767437] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 640:thread_exit: *ERROR*: thread 0x11d33662ca80 got timeout, and move it to the exited state forcefully 00:03:55.540 passed 00:03:55.540 Test: thread_update_stats_test ...passed 00:03:55.540 Test: nested_channel ...passed 00:03:55.540 Test: device_unregister_and_thread_exit_race ...passed 00:03:55.540 Test: cache_closest_timed_poller ...passed 00:03:55.540 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:55.540 Test: io_device_lookup ...passed 00:03:55.540 Test: spdk_spin ...[2024-07-15 18:17:46.768545] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:55.540 [2024-07-15 18:17:46.768566] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82029f9c0 00:03:55.540 [2024-07-15 18:17:46.768577] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:55.540 [2024-07-15 18:17:46.768730] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:55.540 [2024-07-15 18:17:46.768741] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82029f9c0 00:03:55.540 [2024-07-15 18:17:46.768751] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:55.540 [2024-07-15 18:17:46.768760] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82029f9c0 00:03:55.540 [2024-07-15 18:17:46.768770] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:55.540 [2024-07-15 18:17:46.768779] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82029f9c0 00:03:55.540 [2024-07-15 18:17:46.768789] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:55.540 [2024-07-15 18:17:46.768799] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82029f9c0 00:03:55.540 passed 00:03:55.540 Test: for_each_channel_and_thread_exit_race ...passed 00:03:55.540 Test: for_each_thread_and_thread_exit_race ...passed 00:03:55.540 00:03:55.540 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.540 suites 1 1 n/a 0 0 00:03:55.540 tests 20 20 20 0 0 00:03:55.540 asserts 409 409 409 0 n/a 00:03:55.540 00:03:55.540 Elapsed time = 0.008 seconds 00:03:55.540 00:03:55.540 real 0m0.012s 00:03:55.540 user 0m0.012s 00:03:55.540 sys 0m0.000s 00:03:55.540 18:17:46 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.540 18:17:46 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 ************************************ 00:03:55.540 END TEST unittest_thread 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.540 18:17:46 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf 
/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 ************************************ 00:03:55.540 START TEST unittest_iobuf 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:55.540 00:03:55.540 00:03:55.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.540 http://cunit.sourceforge.net/ 00:03:55.540 00:03:55.540 00:03:55.540 Suite: io_channel 00:03:55.540 Test: iobuf ...passed 00:03:55.540 Test: iobuf_cache ...[2024-07-15 18:17:46.819241] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:55.540 passed 00:03:55.540 00:03:55.540 [2024-07-15 18:17:46.819471] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:55.540 [2024-07-15 18:17:46.819510] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:55.540 [2024-07-15 18:17:46.819524] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:55.540 [2024-07-15 18:17:46.819540] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:55.540 [2024-07-15 18:17:46.819552] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
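The iobuf_cache failures above are intentional: each per-thread channel asks for a cache of 5 entries while the shared pool only holds 4, so population stops at "4/5" and the log points at spdk_iobuf_opts. A hedged sizing sketch follows, assuming the spdk_iobuf_opts/spdk_iobuf_set_opts API named in the log; exact signatures and fields shift between SPDK releases, so treat the details as an assumption rather than a recipe:

```c
#include "spdk/thread.h"

/* Hedged sketch: populating a per-channel cache pulls entries out of
 * the shared pool, so the pool must hold at least
 * consumers * per_channel_cache entries. The "4/5" failure above is
 * exactly this inequality being violated (4 < 1 * 5). */
static int
size_iobuf_pools(uint32_t consumers, uint32_t per_channel_cache)
{
	struct spdk_iobuf_opts opts = {};

	opts.small_pool_count = (uint64_t)consumers * per_channel_cache;
	opts.large_pool_count = (uint64_t)consumers * per_channel_cache;
	opts.small_bufsize = 8 * 1024;		/* illustrative sizes only */
	opts.large_bufsize = 128 * 1024;

	return spdk_iobuf_set_opts(&opts);
}
```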
00:03:55.540 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.540 suites 1 1 n/a 0 0 00:03:55.540 tests 2 2 2 0 0 00:03:55.540 asserts 107 107 107 0 n/a 00:03:55.540 00:03:55.540 Elapsed time = 0.000 seconds 00:03:55.540 00:03:55.540 real 0m0.006s 00:03:55.540 user 0m0.000s 00:03:55.540 sys 0m0.008s 00:03:55.540 ************************************ 00:03:55.540 END TEST unittest_iobuf 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.540 18:17:46 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.540 18:17:46 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.540 18:17:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.540 ************************************ 00:03:55.540 START TEST unittest_util 00:03:55.540 ************************************ 00:03:55.540 18:17:46 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:03:55.540 18:17:46 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:55.540 00:03:55.540 00:03:55.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.540 http://cunit.sourceforge.net/ 00:03:55.540 00:03:55.540 00:03:55.540 Suite: base64 00:03:55.540 Test: test_base64_get_encoded_strlen ...passed 00:03:55.540 Test: test_base64_get_decoded_len ...passed 00:03:55.540 Test: test_base64_encode ...passed 00:03:55.540 Test: test_base64_decode ...passed 00:03:55.540 Test: test_base64_urlsafe_encode ...passed 00:03:55.540 Test: test_base64_urlsafe_decode ...passed 00:03:55.540 00:03:55.540 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.540 suites 1 1 n/a 0 0 00:03:55.540 tests 6 6 6 0 0 00:03:55.540 asserts 112 112 112 0 n/a 00:03:55.540 00:03:55.540 Elapsed time = 0.000 seconds 00:03:55.540 18:17:46 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:55.540 00:03:55.540 00:03:55.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.540 http://cunit.sourceforge.net/ 00:03:55.540 00:03:55.540 00:03:55.540 Suite: bit_array 00:03:55.540 Test: test_1bit ...passed 00:03:55.540 Test: test_64bit ...passed 00:03:55.540 Test: test_find ...passed 00:03:55.540 Test: test_resize ...passed 00:03:55.540 Test: test_errors ...passed 00:03:55.540 Test: test_count ...passed 00:03:55.540 Test: test_mask_store_load ...passed 00:03:55.540 Test: test_mask_clear ...passed 00:03:55.540 00:03:55.540 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.540 suites 1 1 n/a 0 0 00:03:55.540 tests 8 8 8 0 0 00:03:55.540 asserts 5075 5075 5075 0 n/a 00:03:55.540 00:03:55.540 Elapsed time = 0.000 seconds 00:03:55.540 18:17:46 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:55.540 00:03:55.540 00:03:55.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.540 http://cunit.sourceforge.net/ 00:03:55.540 00:03:55.540 00:03:55.541 Suite: cpuset 00:03:55.541 Test: test_cpuset ...passed 00:03:55.541 Test: test_cpuset_parse ...[2024-07-15 18:17:46.876512] 
/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:55.541 [2024-07-15 18:17:46.876765] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:55.541 [2024-07-15 18:17:46.876785] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:55.541 [2024-07-15 18:17:46.876799] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:55.541 [2024-07-15 18:17:46.876812] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:55.541 [2024-07-15 18:17:46.876825] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:55.541 [2024-07-15 18:17:46.876838] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:55.541 [2024-07-15 18:17:46.876850] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:55.541 passed 00:03:55.541 Test: test_cpuset_fmt ...passed 00:03:55.541 Test: test_cpuset_foreach ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 4 4 4 0 0 00:03:55.541 asserts 90 90 90 0 n/a 00:03:55.541 00:03:55.541 Elapsed time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: crc16 00:03:55.541 Test: test_crc16_t10dif ...passed 00:03:55.541 Test: test_crc16_t10dif_seed ...passed 00:03:55.541 Test: test_crc16_t10dif_copy ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 3 3 3 0 0 00:03:55.541 asserts 5 5 5 0 n/a 00:03:55.541 00:03:55.541 Elapsed time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: crc32_ieee 00:03:55.541 Test: test_crc32_ieee ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 1 1 1 0 0 00:03:55.541 asserts 1 1 1 0 n/a 00:03:55.541 00:03:55.541 Elapsed time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: crc32c 00:03:55.541 Test: test_crc32c ...passed 00:03:55.541 Test: test_crc32c_nvme ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 2 2 2 0 0 00:03:55.541 asserts 16 16 16 0 n/a 00:03:55.541 00:03:55.541 Elapsed 
time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: crc64 00:03:55.541 Test: test_crc64_nvme ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 1 1 1 0 0 00:03:55.541 asserts 4 4 4 0 n/a 00:03:55.541 00:03:55.541 Elapsed time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: string 00:03:55.541 Test: test_parse_ip_addr ...passed 00:03:55.541 Test: test_str_chomp ...passed 00:03:55.541 Test: test_parse_capacity ...passed 00:03:55.541 Test: test_sprintf_append_realloc ...passed 00:03:55.541 Test: test_strtol ...passed 00:03:55.541 Test: test_strtoll ...passed 00:03:55.541 Test: test_strarray ...passed 00:03:55.541 Test: test_strcpy_replace ...passed 00:03:55.541 00:03:55.541 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.541 suites 1 1 n/a 0 0 00:03:55.541 tests 8 8 8 0 0 00:03:55.541 asserts 161 161 161 0 n/a 00:03:55.541 00:03:55.541 Elapsed time = 0.000 seconds 00:03:55.541 18:17:46 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:55.541 00:03:55.541 00:03:55.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.541 http://cunit.sourceforge.net/ 00:03:55.541 00:03:55.541 00:03:55.541 Suite: dif 00:03:55.541 Test: dif_generate_and_verify_test ...[2024-07-15 18:17:46.907105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:55.541 [2024-07-15 18:17:46.907317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:55.541 [2024-07-15 18:17:46.907361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:55.541 [2024-07-15 18:17:46.907401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:55.541 passed 00:03:55.541 Test: dif_disable_check_test ...[2024-07-15 18:17:46.907440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:55.541 [2024-07-15 18:17:46.907479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:55.541 [2024-07-15 18:17:46.907612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:55.541 [2024-07-15 18:17:46.907651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:55.541 passed 00:03:55.541 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 18:17:46.907690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:55.541 [2024-07-15 18:17:46.907821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:55.541 [2024-07-15 18:17:46.907861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:55.541 [2024-07-15 18:17:46.907901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:55.541 [2024-07-15 18:17:46.907940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:55.541 [2024-07-15 18:17:46.907979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:55.541 [2024-07-15 18:17:46.908018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:55.541 [2024-07-15 18:17:46.908067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:55.541 [2024-07-15 18:17:46.908117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:55.541 [2024-07-15 18:17:46.908156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:55.541 [2024-07-15 18:17:46.908195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:55.541 passed 00:03:55.542 Test: dif_apptag_mask_test ...passed 00:03:55.542 Test: dif_sec_512_md_0_error_test ...passed 00:03:55.542 Test: dif_sec_4096_md_0_error_test ...passed[2024-07-15 18:17:46.908366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:55.542 [2024-07-15 18:17:46.908421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:55.542 [2024-07-15 18:17:46.908460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:55.542 [2024-07-15 18:17:46.908485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:55.542 [2024-07-15 18:17:46.908496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:55.542 [2024-07-15 18:17:46.908505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
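
The *ERROR* lines above are expected output: dif_ut deliberately drives the DIF helpers through their failure paths, and each rejected input or injected mismatch is logged before the test asserts on the return code. A minimal sketch of the negative case behind dif_sec_512_md_0_error_test, assuming the 12-argument spdk_dif_ctx_init() of this SPDK generation (the exact parameter list is an assumption; check include/spdk/dif.h):

    /* Sketch: 512-byte blocks with 0 bytes of metadata cannot hold the
     * 8-byte DIF field, so init must fail, producing the "Metadata size
     * is smaller than DIF size." lines seen above. */
    #include <assert.h>
    #include <stdbool.h>
    #include "spdk/dif.h"

    int
    main(void)
    {
        struct spdk_dif_ctx ctx;
        int rc;

        rc = spdk_dif_ctx_init(&ctx, 512 /* block_size */, 0 /* md_size */,
                               true /* md_interleave */, false /* dif_loc */,
                               SPDK_DIF_TYPE1, SPDK_DIF_FLAGS_GUARD_CHECK,
                               0 /* init_ref_tag */, 0xFFFF /* apptag_mask */,
                               0 /* app_tag */, 0 /* data_offset */,
                               0 /* guard_seed */);
        assert(rc != 0); /* negative test: init is expected to reject this */
        return 0;
    }
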
00:03:55.542 00:03:55.542 Test: dif_sec_4100_md_128_error_test ...passed 00:03:55.542 Test: dif_guard_seed_test ...passed 00:03:55.542 Test: dif_guard_value_test ...passed 00:03:55.542 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...[2024-07-15 18:17:46.908530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:55.542 [2024-07-15 18:17:46.908541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:55.542 passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:55.542 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:55.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 18:17:46.915028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:03:55.542 [2024-07-15 18:17:46.915434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=be21, Actual=fe21 00:03:55.542 [2024-07-15 18:17:46.915866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.916269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.916586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.542 [2024-07-15 18:17:46.916971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.542 [2024-07-15 18:17:46.917288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a4a 00:03:55.542 [2024-07-15 18:17:46.917992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fe21, Actual=e5ce 00:03:55.542 [2024-07-15 18:17:46.918293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:03:55.542 [2024-07-15 18:17:46.918641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=78574660, Actual=38574660 00:03:55.542 [2024-07-15 18:17:46.918957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.919364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.919743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.542 [2024-07-15 18:17:46.920220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.542 [2024-07-15 18:17:46.920636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a97ce590 00:03:55.542 [2024-07-15 18:17:46.920837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=38574660, Actual=486edf56 00:03:55.542 [2024-07-15 18:17:46.921110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.542 [2024-07-15 18:17:46.921459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.542 [2024-07-15 18:17:46.922295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.922613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.923023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.542 [2024-07-15 18:17:46.923429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.542 [2024-07-15 18:17:46.923849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.542 [2024-07-15 18:17:46.924123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.542 passed 00:03:55.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 18:17:46.924189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:03:55.542 [2024-07-15 18:17:46.924232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:03:55.542 [2024-07-15 18:17:46.924273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.924315] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.924360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 [2024-07-15 18:17:46.924488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 [2024-07-15 18:17:46.924532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a4a 00:03:55.542 [2024-07-15 18:17:46.924567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e5ce 00:03:55.542 [2024-07-15 18:17:46.924604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:03:55.542 [2024-07-15 18:17:46.924655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:03:55.542 [2024-07-15 18:17:46.924697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.924738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.924780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.542 [2024-07-15 18:17:46.924821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.542 [2024-07-15 18:17:46.924862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a97ce590 00:03:55.542 [2024-07-15 18:17:46.924898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=486edf56 00:03:55.542 [2024-07-15 18:17:46.925030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.542 [2024-07-15 18:17:46.925074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.542 [2024-07-15 18:17:46.925116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 passed 00:03:55.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 18:17:46.925241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 [2024-07-15 18:17:46.925282] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.542 [2024-07-15 18:17:46.925317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.542 [2024-07-15 18:17:46.925357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:03:55.542 [2024-07-15 18:17:46.925398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:03:55.542 [2024-07-15 18:17:46.925440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 [2024-07-15 18:17:46.925685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.542 [2024-07-15 18:17:46.925727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a4a 00:03:55.542 [2024-07-15 18:17:46.925762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e5ce 00:03:55.542 [2024-07-15 18:17:46.925798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:03:55.542 [2024-07-15 18:17:46.925840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:03:55.542 [2024-07-15 18:17:46.925881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.542 [2024-07-15 18:17:46.925973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.926018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.926064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a97ce590 00:03:55.543 [2024-07-15 18:17:46.926103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=486edf56 00:03:55.543 [2024-07-15 18:17:46.926143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.543 [2024-07-15 18:17:46.926188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.543 [2024-07-15 18:17:46.926234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.926279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.926324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.926462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.926513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.543 [2024-07-15 18:17:46.926552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.543 passed 00:03:55.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 18:17:46.926596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:03:55.543 [2024-07-15 18:17:46.926638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:03:55.543 [2024-07-15 18:17:46.926683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.926728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.926774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.926820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.926866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a4a 00:03:55.543 [2024-07-15 18:17:46.926905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e5ce 00:03:55.543 [2024-07-15 18:17:46.926945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:03:55.543 [2024-07-15 18:17:46.926997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:03:55.543 [2024-07-15 18:17:46.927043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 
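
Each "Failed to compare Guard/App Tag/Ref Tag: LBA=..., Expected=..., Actual=..." line comes from a verify pass over blocks whose protection fields were deliberately corrupted; the Expected/Actual pair shows the injected value next to the recomputed one. A hedged sketch of the detection step using the public spdk_dif_verify() entry point (the spdk_dif_error handling is an assumption from the header):

    /* Sketch: verify a DIF-interleaved buffer; a non-zero return plus the
     * logged Expected/Actual pair means a guard, app tag, or ref tag
     * mismatch was detected at some LBA. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/uio.h>
    #include "spdk/dif.h"

    bool
    corrupted_block_is_detected(struct iovec *iovs, int iovcnt,
                                uint32_t num_blocks,
                                const struct spdk_dif_ctx *ctx)
    {
        struct spdk_dif_error err_blk = {0}; /* filled in on mismatch */

        return spdk_dif_verify(iovs, iovcnt, num_blocks, ctx, &err_blk) != 0;
    }
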
00:03:55.543 [2024-07-15 18:17:46.927178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.927224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a97ce590 00:03:55.543 [2024-07-15 18:17:46.927264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=486edf56 00:03:55.543 [2024-07-15 18:17:46.927304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.543 [2024-07-15 18:17:46.927349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.543 [2024-07-15 18:17:46.927520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.927662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.927704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.543 [2024-07-15 18:17:46.927743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.543 passed 00:03:55.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 18:17:46.927786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:03:55.543 [2024-07-15 18:17:46.927828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:03:55.543 [2024-07-15 18:17:46.927874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.927965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.928010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.928064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a4a 00:03:55.543 [2024-07-15 18:17:46.928104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, 
Actual=e5ce 00:03:55.543 passed 00:03:55.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 18:17:46.928152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:03:55.543 [2024-07-15 18:17:46.928198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:03:55.543 [2024-07-15 18:17:46.928251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.930166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.930207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a97ce590 00:03:55.543 [2024-07-15 18:17:46.930243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=486edf56 00:03:55.543 [2024-07-15 18:17:46.930280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.543 [2024-07-15 18:17:46.930322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.543 [2024-07-15 18:17:46.930363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.930609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 passed 00:03:55.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 18:17:46.930655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.543 [2024-07-15 18:17:46.930690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.543 [2024-07-15 18:17:46.930730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:03:55.543 [2024-07-15 18:17:46.930776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:03:55.543 [2024-07-15 18:17:46.930829] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.930915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.930965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.543 [2024-07-15 18:17:46.931007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a4a 00:03:55.543 [2024-07-15 18:17:46.931047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e5ce 00:03:55.543 passed 00:03:55.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 18:17:46.931087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:03:55.543 [2024-07-15 18:17:46.931128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:03:55.543 [2024-07-15 18:17:46.931174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.931386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.931438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.931480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:03:55.543 [2024-07-15 18:17:46.931526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a97ce590 00:03:55.543 [2024-07-15 18:17:46.931565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=486edf56 00:03:55.543 [2024-07-15 18:17:46.931607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.543 [2024-07-15 18:17:46.931652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:03:55.543 [2024-07-15 18:17:46.931697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.543 [2024-07-15 18:17:46.931743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.931788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.544 [2024-07-15 18:17:46.931833] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:03:55.544 [2024-07-15 18:17:46.931878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.544 [2024-07-15 18:17:46.931917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=145dbb21fffb6e02 00:03:55.544 passed 00:03:55.544 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:55.544 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:55.544 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:55.544 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 18:17:46.938966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:03:55.544 [2024-07-15 18:17:46.939194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:03:55.544 [2024-07-15 18:17:46.939568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.939754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.940077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.940269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.940512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a4a 00:03:55.544 [2024-07-15 18:17:46.940703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=892f 00:03:55.544 [2024-07-15 18:17:46.940874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:03:55.544 [2024-07-15 18:17:46.941099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:03:55.544 [2024-07-15 18:17:46.941278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.941457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.941757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.942290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.942488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a97ce590 00:03:55.544 [2024-07-15 18:17:46.942767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=eb84c95b 00:03:55.544 [2024-07-15 18:17:46.942946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.544 [2024-07-15 18:17:46.943220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:03:55.544 [2024-07-15 18:17:46.943400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.943694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.943999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.944200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.944372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.544 [2024-07-15 18:17:46.944542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d4745167fe88f90b 00:03:55.544 passed 00:03:55.544 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 18:17:46.944703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:03:55.544 [2024-07-15 18:17:46.944764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:03:55.544 [2024-07-15 18:17:46.944820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.944874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.944919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.944973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.945021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a4a 00:03:55.544 [2024-07-15 18:17:46.945069] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=892f 00:03:55.544 [2024-07-15 18:17:46.945118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:03:55.544 [2024-07-15 18:17:46.945455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:03:55.544 [2024-07-15 18:17:46.945527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.945593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.945650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.945699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.945747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a97ce590 00:03:55.544 [2024-07-15 18:17:46.945795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=eb84c95b 00:03:55.544 [2024-07-15 18:17:46.945937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.544 [2024-07-15 18:17:46.946004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:03:55.544 [2024-07-15 18:17:46.946069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.946128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.946177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.946225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.946274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.544 [2024-07-15 18:17:46.946321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d4745167fe88f90b 00:03:55.544 passed 00:03:55.544 Test: dix_sec_512_md_0_error ...passed 00:03:55.544 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-15 18:17:46.946338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
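
The dix_* cases that follow repeat the same checks with the protection information carried in a separate metadata buffer rather than interleaved with the data. A sketch assuming the spdk_dix_verify() entry point, which takes the extra md_iov argument:

    /* Sketch: DIX verification, where data and the per-block protection
     * fields live in distinct buffers (data_iovs vs. md_iov). */
    #include <stdint.h>
    #include <sys/uio.h>
    #include "spdk/dif.h"

    int
    verify_separate_metadata(struct iovec *data_iovs, int iovcnt,
                             struct iovec *md_iov, uint32_t num_blocks,
                             const struct spdk_dif_ctx *ctx)
    {
        struct spdk_dif_error err_blk = {0};

        return spdk_dix_verify(data_iovs, iovcnt, md_iov, num_blocks,
                               ctx, &err_blk);
    }
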
00:03:55.544 passed 00:03:55.544 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:55.544 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:55.544 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:55.544 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:55.544 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:55.544 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:55.544 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:55.544 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:55.544 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 18:17:46.953086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:03:55.544 [2024-07-15 18:17:46.953446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:03:55.544 [2024-07-15 18:17:46.953633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.953801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.953968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.954135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.544 [2024-07-15 18:17:46.954367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a4a 00:03:55.544 [2024-07-15 18:17:46.954540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=892f 00:03:55.544 [2024-07-15 18:17:46.954720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:03:55.544 [2024-07-15 18:17:46.954956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:03:55.544 [2024-07-15 18:17:46.955130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.955431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.544 [2024-07-15 18:17:46.955630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.955924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.544 [2024-07-15 18:17:46.956108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a97ce590 00:03:55.544 [2024-07-15 18:17:46.956284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=eb84c95b 
00:03:55.544 [2024-07-15 18:17:46.956455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.544 [2024-07-15 18:17:46.956718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:03:55.544 [2024-07-15 18:17:46.956892] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.957065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.957268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 [2024-07-15 18:17:46.957442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 [2024-07-15 18:17:46.957725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.545 passed 00:03:55.545 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 18:17:46.957916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d4745167fe88f90b 00:03:55.545 [2024-07-15 18:17:46.957992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:03:55.545 [2024-07-15 18:17:46.958060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:03:55.545 [2024-07-15 18:17:46.958131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.958322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.958389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 [2024-07-15 18:17:46.958452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 [2024-07-15 18:17:46.958519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a4a 00:03:55.545 [2024-07-15 18:17:46.958584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=892f 00:03:55.545 [2024-07-15 18:17:46.958644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:03:55.545 [2024-07-15 18:17:46.958711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:03:55.545 [2024-07-15 18:17:46.958775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 
18:17:46.959018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.959070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.545 [2024-07-15 18:17:46.959126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:03:55.545 [2024-07-15 18:17:46.959183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a97ce590 00:03:55.545 [2024-07-15 18:17:46.959237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=eb84c95b 00:03:55.545 [2024-07-15 18:17:46.959291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:55.545 [2024-07-15 18:17:46.959336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:03:55.545 [2024-07-15 18:17:46.959379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.959424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:03:55.545 [2024-07-15 18:17:46.959469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 passed 00:03:55.545 Test: set_md_interleave_iovs_test ...[2024-07-15 18:17:46.959512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:03:55.545 [2024-07-15 18:17:46.959555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1b2efd87ff68afdb 00:03:55.545 [2024-07-15 18:17:46.959599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d4745167fe88f90b 00:03:55.545 passed 00:03:55.545 Test: set_md_interleave_iovs_split_test ...passed 00:03:55.545 Test: dif_generate_stream_pi_16_test ...passed 00:03:55.545 Test: dif_generate_stream_test ...passed 00:03:55.545 Test: set_md_interleave_iovs_alignment_test ...[2024-07-15 18:17:46.960628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
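
The "Buffer overflow will occur" message from spdk_dif_set_md_interleave_iovs() appears to be its sizing guard for interleaved layouts: with metadata interleaved, each block occupies the full block_size (data plus md), so the destination needs at least num_blocks * block_size bytes. An illustrative standalone check of that arithmetic (this mirrors the rule, not the SPDK implementation):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Does dst_bytes hold num_blocks of interleaved data + metadata? */
    static bool
    interleave_dst_large_enough(uint64_t dst_bytes, uint32_t num_blocks,
                                uint32_t block_size)
    {
        return dst_bytes >= (uint64_t)num_blocks * block_size;
    }

    int
    main(void)
    {
        /* 8 blocks of 512B data + 8B interleaved metadata each (520B). */
        printf("%s\n", interleave_dst_large_enough(8 * 520, 8, 520)
                       ? "fits" : "overflow");
        /* A data-only-sized buffer is too small once md is interleaved. */
        printf("%s\n", interleave_dst_large_enough(8 * 512, 8, 520)
                       ? "fits" : "overflow");
        return 0;
    }
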
00:03:55.545 passed 00:03:55.545 Test: dif_generate_split_test ...passed 00:03:55.545 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:55.545 Test: dif_verify_split_test ...passed 00:03:55.545 Test: dif_verify_stream_multi_segments_test ...passed 00:03:55.545 Test: update_crc32c_pi_16_test ...passed 00:03:55.545 Test: update_crc32c_test ...passed 00:03:55.545 Test: dif_update_crc32c_split_test ...passed 00:03:55.545 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:55.545 Test: get_range_with_md_test ...passed 00:03:55.545 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:55.545 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:55.545 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:55.545 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:55.545 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:55.545 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:55.545 Test: dif_generate_and_verify_unmap_test ...passed 00:03:55.545 00:03:55.545 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.545 suites 1 1 n/a 0 0 00:03:55.545 tests 79 79 79 0 0 00:03:55.545 asserts 3584 3584 3584 0 n/a 00:03:55.545 00:03:55.545 Elapsed time = 0.062 seconds 00:03:55.545 18:17:46 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:55.545 00:03:55.545 00:03:55.545 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.545 http://cunit.sourceforge.net/ 00:03:55.545 00:03:55.545 00:03:55.545 Suite: iov 00:03:55.545 Test: test_single_iov ...passed 00:03:55.545 Test: test_simple_iov ...passed 00:03:55.545 Test: test_complex_iov ...passed 00:03:55.545 Test: test_iovs_to_buf ...passed 00:03:55.545 Test: test_buf_to_iovs ...passed 00:03:55.545 Test: test_memset ...passed 00:03:55.545 Test: test_iov_one ...passed 00:03:55.545 Test: test_iov_xfer ...passed 00:03:55.545 00:03:55.545 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.545 suites 1 1 n/a 0 0 00:03:55.545 tests 8 8 8 0 0 00:03:55.545 asserts 156 156 156 0 n/a 00:03:55.545 00:03:55.545 Elapsed time = 0.000 seconds 00:03:55.545 18:17:46 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:55.545 00:03:55.545 00:03:55.545 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.545 http://cunit.sourceforge.net/ 00:03:55.545 00:03:55.545 00:03:55.545 Suite: math 00:03:55.545 Test: test_serial_number_arithmetic ...passed 00:03:55.545 Suite: erase 00:03:55.545 Test: test_memset_s ...passed 00:03:55.545 00:03:55.545 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.545 suites 2 2 n/a 0 0 00:03:55.545 tests 2 2 2 0 0 00:03:55.545 asserts 18 18 18 0 n/a 00:03:55.545 00:03:55.545 Elapsed time = 0.000 seconds 00:03:55.545 18:17:46 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:55.545 00:03:55.545 00:03:55.545 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.545 http://cunit.sourceforge.net/ 00:03:55.545 00:03:55.545 00:03:55.545 Suite: pipe 00:03:55.545 Test: test_create_destroy ...passed 00:03:55.545 Test: test_write_get_buffer ...passed 00:03:55.545 Test: test_write_advance ...passed 00:03:55.545 Test: test_read_get_buffer ...passed 00:03:55.545 Test: test_read_advance ...passed 00:03:55.545 Test: 
test_data ...passed 00:03:55.545 00:03:55.545 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.545 suites 1 1 n/a 0 0 00:03:55.545 tests 6 6 6 0 0 00:03:55.545 asserts 251 251 251 0 n/a 00:03:55.545 00:03:55.545 Elapsed time = 0.000 seconds 00:03:55.545 18:17:46 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:55.545 00:03:55.545 00:03:55.545 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.545 http://cunit.sourceforge.net/ 00:03:55.545 00:03:55.545 00:03:55.545 Suite: xor 00:03:55.545 Test: test_xor_gen ...passed 00:03:55.545 00:03:55.545 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.545 suites 1 1 n/a 0 0 00:03:55.545 tests 1 1 1 0 0 00:03:55.545 asserts 17 17 17 0 n/a 00:03:55.545 00:03:55.546 Elapsed time = 0.000 seconds 00:03:55.546 00:03:55.546 real 0m0.134s 00:03:55.546 user 0m0.086s 00:03:55.546 sys 0m0.048s 00:03:55.546 18:17:46 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.546 ************************************ 00:03:55.546 END TEST unittest_util 00:03:55.546 ************************************ 00:03:55.546 18:17:46 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST unittest_dma 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:55.546 00:03:55.546 00:03:55.546 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.546 http://cunit.sourceforge.net/ 00:03:55.546 00:03:55.546 00:03:55.546 Suite: dma_suite 00:03:55.546 Test: test_dma ...passed 00:03:55.546 00:03:55.546 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.546 suites 1 1 n/a 0 0 00:03:55.546 tests 1 1 1 0 0 00:03:55.546 asserts 54 54 54 0 n/a 00:03:55.546 00:03:55.546 Elapsed time = 0.000 seconds 00:03:55.546 [2024-07-15 18:17:47.041932] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:55.546 00:03:55.546 real 0m0.006s 00:03:55.546 user 0m0.005s 00:03:55.546 sys 0m0.004s 00:03:55.546 18:17:47 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.546 18:17:47 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST unittest_dma 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.546 18:17:47 
unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST unittest_init 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:03:55.546 18:17:47 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:55.546 00:03:55.546 00:03:55.546 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.546 http://cunit.sourceforge.net/ 00:03:55.546 00:03:55.546 00:03:55.546 Suite: subsystem_suite 00:03:55.546 Test: subsystem_sort_test_depends_on_single ...passed 00:03:55.546 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:55.546 Test: subsystem_sort_test_missing_dependency ...passed 00:03:55.546 00:03:55.546 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.546 suites 1 1 n/a 0 0 00:03:55.546 tests 3 3 3 0 0 00:03:55.546 asserts 20 20 20 0 n/a 00:03:55.546 00:03:55.546 Elapsed time = 0.000 seconds 00:03:55.546 [2024-07-15 18:17:47.084474] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:55.546 [2024-07-15 18:17:47.084699] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:55.546 00:03:55.546 real 0m0.006s 00:03:55.546 user 0m0.000s 00:03:55.546 sys 0m0.008s 00:03:55.546 18:17:47 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.546 18:17:47 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST unittest_init 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST unittest_keyring 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:55.546 00:03:55.546 00:03:55.546 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.546 http://cunit.sourceforge.net/ 00:03:55.546 00:03:55.546 00:03:55.546 Suite: keyring 00:03:55.546 Test: test_keyring_add_remove ...[2024-07-15 18:17:47.128384] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:03:55.546 [2024-07-15 18:17:47.128636] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:03:55.546 passed 00:03:55.546 Test: test_keyring_get_put ...passed 00:03:55.546 [2024-07-15 18:17:47.128659] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:03:55.546 00:03:55.546 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.546 suites 1 1 n/a 0 0 00:03:55.546 tests 2 2 2 0 0 00:03:55.546 asserts 44 44 44 0 n/a 00:03:55.546 00:03:55.546 
Elapsed time = 0.000 seconds 00:03:55.546 00:03:55.546 real 0m0.006s 00:03:55.546 user 0m0.005s 00:03:55.546 sys 0m0.005s 00:03:55.546 18:17:47 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.546 18:17:47 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST unittest_keyring 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:55.546 00:03:55.546 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:03:55.546 18:17:47 unittest -- unit/unittest.sh@305 -- # set +x 00:03:55.546 ===================== 00:03:55.546 All unit tests passed 00:03:55.546 ===================== 00:03:55.546 WARN: lcov not installed or SPDK built without coverage! 00:03:55.546 WARN: neither valgrind nor ASAN is enabled! 00:03:55.546 00:03:55.546 00:03:55.546 00:03:55.546 real 0m32.366s 00:03:55.546 user 0m14.456s 00:03:55.546 sys 0m1.284s 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.546 ************************************ 00:03:55.546 END TEST unittest 00:03:55.546 ************************************ 00:03:55.546 18:17:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 18:17:47 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.546 18:17:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:55.546 18:17:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:55.546 18:17:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:55.546 18:17:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:55.546 18:17:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.546 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 18:17:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:55.546 18:17:47 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.546 18:17:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.546 18:17:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.546 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST env 00:03:55.546 ************************************ 00:03:55.546 18:17:47 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.546 * Looking for test storage... 
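Note: the subsystem_ut errors a few lines above ("subsystem A dependency B is missing", "subsystem C is missing") come from the init-time dependency check: a subsystem may only be started once everything it declares a dependency on has itself been registered. Below is a hedged sketch of that rule, not SPDK's spdk_subsystem_init(); the names and the flat registry are illustrative only.

```c
#include <stdio.h>
#include <string.h>

struct subsystem {
	const char *name;
	const char *depends_on; /* NULL when there is no dependency */
};

/* "B" is never registered, so initializing "A" must fail. */
static const struct subsystem registered[] = {
	{ "A", "B" },
	{ "D", NULL },
};

static int
is_registered(const char *name)
{
	for (size_t i = 0; i < sizeof(registered) / sizeof(registered[0]); i++) {
		if (strcmp(registered[i].name, name) == 0) {
			return 1;
		}
	}
	return 0;
}

int
main(void)
{
	int rc = 0;

	/* Validate every declared dependency before computing init order. */
	for (size_t i = 0; i < sizeof(registered) / sizeof(registered[0]); i++) {
		const struct subsystem *s = &registered[i];

		if (s->depends_on != NULL && !is_registered(s->depends_on)) {
			fprintf(stderr, "subsystem %s dependency %s is missing\n",
				s->name, s->depends_on);
			rc = 1;
		}
	}
	return rc;
}
```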
00:03:55.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:55.546 18:17:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.546 18:17:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.546 18:17:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.546 18:17:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST env_memory 00:03:55.546 ************************************ 00:03:55.546 18:17:47 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.546 00:03:55.546 00:03:55.546 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.546 http://cunit.sourceforge.net/ 00:03:55.546 00:03:55.546 00:03:55.546 Suite: memory 00:03:55.546 Test: alloc and free memory map ...[2024-07-15 18:17:47.434283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:55.546 passed 00:03:55.546 Test: mem map translation ...[2024-07-15 18:17:47.441227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:55.546 [2024-07-15 18:17:47.441263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:55.546 [2024-07-15 18:17:47.441295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:55.546 [2024-07-15 18:17:47.441307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:55.546 passed 00:03:55.546 Test: mem map registration ...[2024-07-15 18:17:47.449931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:55.546 [2024-07-15 18:17:47.449961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:55.546 passed 00:03:55.546 Test: mem map adjacent registrations ...passed 00:03:55.546 00:03:55.546 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.546 suites 1 1 n/a 0 0 00:03:55.546 tests 4 4 4 0 0 00:03:55.546 asserts 152 152 152 0 n/a 00:03:55.546 00:03:55.546 Elapsed time = 0.039 seconds 00:03:55.546 00:03:55.546 real 0m0.041s 00:03:55.546 user 0m0.042s 00:03:55.547 sys 0m0.003s 00:03:55.547 18:17:47 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.547 18:17:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:55.547 ************************************ 00:03:55.547 END TEST env_memory 00:03:55.547 ************************************ 00:03:55.547 18:17:47 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.547 18:17:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:55.547 18:17:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.547 18:17:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.547 18:17:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.547 ************************************ 00:03:55.547 START TEST env_vtophys 
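Note: the invalid-parameter errors from memory_ut above illustrate the alignment rule behind spdk_mem_map_set_translation: vaddr=2097152 len=1234 is rejected because the length is not a multiple of the 2 MiB translation granularity, and vaddr=1234 len=2097152 is rejected because the address is unaligned. A toy sketch of that validation follows, assuming 2 MiB pages; it is not the SPDK implementation.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_2MB (2ULL * 1024 * 1024)

static int
set_translation(uint64_t vaddr, uint64_t len, uint64_t paddr)
{
	/* Both the virtual address and the length must be 2 MiB aligned. */
	if ((vaddr & (PAGE_2MB - 1)) != 0 || (len & (PAGE_2MB - 1)) != 0 || len == 0) {
		fprintf(stderr, "invalid set_translation parameters, vaddr=%ju len=%ju\n",
			(uintmax_t)vaddr, (uintmax_t)len);
		return -1;
	}
	/* A real map would record vaddr -> paddr for each 2 MiB page here. */
	(void)paddr;
	return 0;
}

int
main(void)
{
	set_translation(2097152, 1234, 0); /* rejected: len not a 2 MiB multiple */
	set_translation(1234, 2097152, 0); /* rejected: vaddr unaligned          */
	return set_translation(PAGE_2MB, PAGE_2MB, 0x20000000); /* accepted */
}
```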
00:03:55.547 ************************************ 00:03:55.547 18:17:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:55.547 EAL: lib.eal log level changed from notice to debug 00:03:55.547 EAL: Sysctl reports 10 cpus 00:03:55.547 EAL: Detected lcore 0 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 1 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 2 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 3 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 4 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 5 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 6 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 7 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 8 as core 0 on socket 0 00:03:55.547 EAL: Detected lcore 9 as core 0 on socket 0 00:03:55.547 EAL: Maximum logical cores by configuration: 128 00:03:55.547 EAL: Detected CPU lcores: 10 00:03:55.547 EAL: Detected NUMA nodes: 1 00:03:55.547 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:55.547 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:55.547 EAL: Checking presence of .so 'librte_eal.so' 00:03:55.547 EAL: Detected static linkage of DPDK 00:03:55.547 EAL: No shared files mode enabled, IPC will be disabled 00:03:55.547 EAL: PCI scan found 10 devices 00:03:55.547 EAL: Specific IOVA mode is not requested, autodetecting 00:03:55.547 EAL: Selecting IOVA mode according to bus requests 00:03:55.547 EAL: Bus pci wants IOVA as 'PA' 00:03:55.547 EAL: Selected IOVA mode 'PA' 00:03:55.547 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:55.547 EAL: Ask a virtual area of 0x2e000 bytes 00:03:55.547 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000109000) not respected! 00:03:55.547 EAL: This may cause issues with mapping memory into secondary processes 00:03:55.547 EAL: Virtual area found at 0x1000109000 (size = 0x2e000) 00:03:55.547 EAL: Setting up physically contiguous memory... 00:03:55.547 EAL: Ask a virtual area of 0x1000 bytes 00:03:55.547 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x10006f4000) not respected! 00:03:55.547 EAL: This may cause issues with mapping memory into secondary processes 00:03:55.547 EAL: Virtual area found at 0x10006f4000 (size = 0x1000) 00:03:55.547 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:55.547 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:55.547 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:55.547 EAL: This may cause issues with mapping memory into secondary processes 00:03:55.547 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:55.547 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:55.547 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x20000000, len 268435456 00:03:55.547 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x30000000, len 268435456 00:03:55.547 EAL: Mapped memory segment 2 @ 0x1090000000: physaddr:0x60000000, len 268435456 00:03:55.547 EAL: Mapped memory segment 3 @ 0x1080000000: physaddr:0x70000000, len 268435456 00:03:55.806 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x80000000, len 268435456 00:03:55.806 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0xa0000000, len 268435456 00:03:55.806 EAL: Mapped memory segment 6 @ 0x10e0000000: physaddr:0x150000000, len 268435456 00:03:55.806 EAL: Mapped memory segment 7 @ 0x1100000000: physaddr:0x230000000, len 268435456 00:03:55.806 EAL: No shared files mode enabled, IPC is disabled 00:03:55.806 EAL: Added 1280M to heap on socket 0 00:03:55.806 EAL: Added 256M to heap on socket 0 00:03:55.806 EAL: Added 256M to heap on socket 0 00:03:55.806 EAL: Added 256M to heap on socket 0 00:03:55.806 EAL: TSC is not safe to use in SMP mode 00:03:55.806 EAL: TSC is not invariant 00:03:55.806 EAL: TSC frequency is ~2200002 KHz 00:03:55.806 EAL: Main lcore 0 is ready (tid=339768212000;cpuset=[0]) 00:03:55.806 EAL: PCI scan found 10 devices 00:03:55.806 EAL: Registering mem event callbacks not supported 00:03:55.806 00:03:55.806 00:03:55.806 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.806 http://cunit.sourceforge.net/ 00:03:55.806 00:03:55.806 00:03:55.806 Suite: components_suite 00:03:55.806 Test: vtophys_malloc_test ...passed 00:03:56.374 Test: vtophys_spdk_malloc_test ...passed 00:03:56.374 00:03:56.374 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.374 suites 1 1 n/a 0 0 00:03:56.374 tests 2 2 2 0 0 00:03:56.374 asserts 521 521 521 0 n/a 00:03:56.374 00:03:56.374 Elapsed time = 0.484 seconds 00:03:56.374 00:03:56.374 real 0m1.179s 00:03:56.374 user 0m0.485s 00:03:56.374 sys 0m0.694s 00:03:56.374 18:17:48 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.374 18:17:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:56.374 ************************************ 00:03:56.374 END TEST env_vtophys 00:03:56.374 ************************************ 00:03:56.374 18:17:48 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.374 18:17:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.374 18:17:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.374 18:17:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.374 18:17:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.374 ************************************ 00:03:56.374 START TEST env_pci 00:03:56.374 ************************************ 00:03:56.374 18:17:48 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.374 00:03:56.374 00:03:56.374 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.374 http://cunit.sourceforge.net/ 00:03:56.374 00:03:56.374 00:03:56.374 Suite: pci 00:03:56.374 Test: pci_hook ...passed 00:03:56.374 00:03:56.374 EAL: Cannot find device (10000:00:01.0) 00:03:56.374 EAL: Failed to attach device on primary process 00:03:56.374 Run Summary: Type 
Total Ran Passed Failed Inactive 00:03:56.374 suites 1 1 n/a 0 0 00:03:56.374 tests 1 1 1 0 0 00:03:56.374 asserts 25 25 25 0 n/a 00:03:56.374 00:03:56.374 Elapsed time = 0.000 seconds 00:03:56.374 00:03:56.374 real 0m0.007s 00:03:56.374 user 0m0.007s 00:03:56.374 sys 0m0.000s 00:03:56.374 18:17:48 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.374 ************************************ 00:03:56.374 END TEST env_pci 00:03:56.374 ************************************ 00:03:56.374 18:17:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:56.632 18:17:48 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.632 18:17:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:56.632 18:17:48 env -- env/env.sh@15 -- # uname 00:03:56.632 18:17:48 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:56.632 18:17:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:56.632 18:17:48 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:03:56.632 18:17:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.632 18:17:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.632 ************************************ 00:03:56.632 START TEST env_dpdk_post_init 00:03:56.632 ************************************ 00:03:56.632 18:17:48 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:56.632 EAL: Sysctl reports 10 cpus 00:03:56.632 EAL: Detected CPU lcores: 10 00:03:56.632 EAL: Detected NUMA nodes: 1 00:03:56.632 EAL: Detected static linkage of DPDK 00:03:56.632 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.632 EAL: Selected IOVA mode 'PA' 00:03:56.632 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:56.632 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x20000000, len 268435456 00:03:56.632 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x30000000, len 268435456 00:03:56.890 EAL: Mapped memory segment 2 @ 0x1090000000: physaddr:0x60000000, len 268435456 00:03:56.890 EAL: Mapped memory segment 3 @ 0x1080000000: physaddr:0x70000000, len 268435456 00:03:56.890 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x80000000, len 268435456 00:03:56.890 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0xa0000000, len 268435456 00:03:57.148 EAL: Mapped memory segment 6 @ 0x10e0000000: physaddr:0x150000000, len 268435456 00:03:57.148 EAL: Mapped memory segment 7 @ 0x1100000000: physaddr:0x230000000, len 268435456 00:03:57.148 EAL: TSC is not safe to use in SMP mode 00:03:57.148 EAL: TSC is not invariant 00:03:57.148 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.148 [2024-07-15 18:17:49.401044] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:57.148 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:57.148 Starting DPDK initialization... 00:03:57.148 Starting SPDK post initialization... 00:03:57.148 SPDK NVMe probe 00:03:57.148 Attaching to 0000:00:10.0 00:03:57.148 Attached to 0000:00:10.0 00:03:57.148 Cleaning up... 
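Note: the "Attaching to 0000:00:10.0" / "Attached to 0000:00:10.0" lines above are env_dpdk_post_init enumerating the emulated controller through the public probe API. Below is a minimal sketch of the same flow using spdk_nvme_probe(); passing a NULL transport ID scans the local PCIe bus, and error handling is trimmed to the essentials.

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* attach to every controller the scan finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr); /* the log's "Cleaning up..." step */
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL transport ID: scan the local PCIe bus, as the test does. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}
```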
00:03:57.148 00:03:57.148 real 0m0.690s 00:03:57.148 user 0m0.007s 00:03:57.148 sys 0m0.680s 00:03:57.148 18:17:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.148 ************************************ 00:03:57.148 END TEST env_dpdk_post_init 00:03:57.148 18:17:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.148 ************************************ 00:03:57.148 18:17:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:57.148 18:17:49 env -- env/env.sh@26 -- # uname 00:03:57.148 18:17:49 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:57.148 00:03:57.148 real 0m2.269s 00:03:57.148 user 0m0.731s 00:03:57.148 sys 0m1.583s 00:03:57.148 18:17:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.148 18:17:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.148 ************************************ 00:03:57.148 END TEST env 00:03:57.148 ************************************ 00:03:57.148 18:17:49 -- common/autotest_common.sh@1142 -- # return 0 00:03:57.148 18:17:49 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.148 18:17:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.148 18:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.148 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:57.148 ************************************ 00:03:57.148 START TEST rpc 00:03:57.148 ************************************ 00:03:57.148 18:17:49 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.407 * Looking for test storage... 00:03:57.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.407 18:17:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45499 00:03:57.407 18:17:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:57.407 18:17:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.407 18:17:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45499 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@829 -- # '[' -z 45499 ']' 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.407 18:17:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.407 [2024-07-15 18:17:49.655487] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:03:57.407 [2024-07-15 18:17:49.655685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:57.999 EAL: TSC is not safe to use in SMP mode 00:03:57.999 EAL: TSC is not invariant 00:03:57.999 [2024-07-15 18:17:50.302987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.256 [2024-07-15 18:17:50.423898] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:58.256 [2024-07-15 18:17:50.426429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:58.256 [2024-07-15 18:17:50.426470] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45499' to capture a snapshot of events at runtime. 00:03:58.256 [2024-07-15 18:17:50.426498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.515 18:17:50 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.515 18:17:50 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:58.515 18:17:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.515 18:17:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.515 18:17:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.515 18:17:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.515 18:17:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.515 18:17:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.515 18:17:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.515 ************************************ 00:03:58.515 START TEST rpc_integrity 00:03:58.515 ************************************ 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.515 { 00:03:58.515 "name": "Malloc0", 00:03:58.515 "aliases": [ 00:03:58.515 "8bd80c49-42d6-11ef-9ade-d5fc5159efa5" 00:03:58.515 ], 00:03:58.515 "product_name": "Malloc disk", 00:03:58.515 "block_size": 512, 00:03:58.515 "num_blocks": 16384, 00:03:58.515 "uuid": "8bd80c49-42d6-11ef-9ade-d5fc5159efa5", 00:03:58.515 "assigned_rate_limits": { 00:03:58.515 "rw_ios_per_sec": 0, 00:03:58.515 "rw_mbytes_per_sec": 0, 00:03:58.515 "r_mbytes_per_sec": 0, 00:03:58.515 "w_mbytes_per_sec": 0 00:03:58.515 }, 00:03:58.515 "claimed": false, 00:03:58.515 
"zoned": false, 00:03:58.515 "supported_io_types": { 00:03:58.515 "read": true, 00:03:58.515 "write": true, 00:03:58.515 "unmap": true, 00:03:58.515 "flush": true, 00:03:58.515 "reset": true, 00:03:58.515 "nvme_admin": false, 00:03:58.515 "nvme_io": false, 00:03:58.515 "nvme_io_md": false, 00:03:58.515 "write_zeroes": true, 00:03:58.515 "zcopy": true, 00:03:58.515 "get_zone_info": false, 00:03:58.515 "zone_management": false, 00:03:58.515 "zone_append": false, 00:03:58.515 "compare": false, 00:03:58.515 "compare_and_write": false, 00:03:58.515 "abort": true, 00:03:58.515 "seek_hole": false, 00:03:58.515 "seek_data": false, 00:03:58.515 "copy": true, 00:03:58.515 "nvme_iov_md": false 00:03:58.515 }, 00:03:58.515 "memory_domains": [ 00:03:58.515 { 00:03:58.515 "dma_device_id": "system", 00:03:58.515 "dma_device_type": 1 00:03:58.515 }, 00:03:58.515 { 00:03:58.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.515 "dma_device_type": 2 00:03:58.515 } 00:03:58.515 ], 00:03:58.515 "driver_specific": {} 00:03:58.515 } 00:03:58.515 ]' 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.515 [2024-07-15 18:17:50.868110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:58.515 [2024-07-15 18:17:50.868177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.515 [2024-07-15 18:17:50.869032] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7487037a00 00:03:58.515 [2024-07-15 18:17:50.869062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.515 [2024-07-15 18:17:50.870052] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.515 [2024-07-15 18:17:50.870087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.515 Passthru0 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.515 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.515 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.773 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.773 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.773 { 00:03:58.773 "name": "Malloc0", 00:03:58.773 "aliases": [ 00:03:58.773 "8bd80c49-42d6-11ef-9ade-d5fc5159efa5" 00:03:58.773 ], 00:03:58.773 "product_name": "Malloc disk", 00:03:58.773 "block_size": 512, 00:03:58.773 "num_blocks": 16384, 00:03:58.773 "uuid": "8bd80c49-42d6-11ef-9ade-d5fc5159efa5", 00:03:58.773 "assigned_rate_limits": { 00:03:58.773 "rw_ios_per_sec": 0, 00:03:58.773 "rw_mbytes_per_sec": 0, 00:03:58.773 "r_mbytes_per_sec": 0, 00:03:58.773 "w_mbytes_per_sec": 0 00:03:58.773 }, 00:03:58.773 "claimed": true, 00:03:58.773 "claim_type": "exclusive_write", 00:03:58.773 "zoned": false, 00:03:58.773 "supported_io_types": { 00:03:58.773 "read": true, 00:03:58.773 "write": true, 00:03:58.773 "unmap": true, 00:03:58.773 "flush": true, 00:03:58.773 "reset": true, 
00:03:58.773 "nvme_admin": false, 00:03:58.773 "nvme_io": false, 00:03:58.773 "nvme_io_md": false, 00:03:58.773 "write_zeroes": true, 00:03:58.773 "zcopy": true, 00:03:58.773 "get_zone_info": false, 00:03:58.773 "zone_management": false, 00:03:58.773 "zone_append": false, 00:03:58.773 "compare": false, 00:03:58.773 "compare_and_write": false, 00:03:58.773 "abort": true, 00:03:58.773 "seek_hole": false, 00:03:58.773 "seek_data": false, 00:03:58.773 "copy": true, 00:03:58.773 "nvme_iov_md": false 00:03:58.773 }, 00:03:58.773 "memory_domains": [ 00:03:58.773 { 00:03:58.773 "dma_device_id": "system", 00:03:58.773 "dma_device_type": 1 00:03:58.773 }, 00:03:58.773 { 00:03:58.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.773 "dma_device_type": 2 00:03:58.773 } 00:03:58.773 ], 00:03:58.773 "driver_specific": {} 00:03:58.773 }, 00:03:58.773 { 00:03:58.773 "name": "Passthru0", 00:03:58.773 "aliases": [ 00:03:58.773 "a395bdb0-a97c-9459-bea0-8fdedd19c033" 00:03:58.773 ], 00:03:58.773 "product_name": "passthru", 00:03:58.774 "block_size": 512, 00:03:58.774 "num_blocks": 16384, 00:03:58.774 "uuid": "a395bdb0-a97c-9459-bea0-8fdedd19c033", 00:03:58.774 "assigned_rate_limits": { 00:03:58.774 "rw_ios_per_sec": 0, 00:03:58.774 "rw_mbytes_per_sec": 0, 00:03:58.774 "r_mbytes_per_sec": 0, 00:03:58.774 "w_mbytes_per_sec": 0 00:03:58.774 }, 00:03:58.774 "claimed": false, 00:03:58.774 "zoned": false, 00:03:58.774 "supported_io_types": { 00:03:58.774 "read": true, 00:03:58.774 "write": true, 00:03:58.774 "unmap": true, 00:03:58.774 "flush": true, 00:03:58.774 "reset": true, 00:03:58.774 "nvme_admin": false, 00:03:58.774 "nvme_io": false, 00:03:58.774 "nvme_io_md": false, 00:03:58.774 "write_zeroes": true, 00:03:58.774 "zcopy": true, 00:03:58.774 "get_zone_info": false, 00:03:58.774 "zone_management": false, 00:03:58.774 "zone_append": false, 00:03:58.774 "compare": false, 00:03:58.774 "compare_and_write": false, 00:03:58.774 "abort": true, 00:03:58.774 "seek_hole": false, 00:03:58.774 "seek_data": false, 00:03:58.774 "copy": true, 00:03:58.774 "nvme_iov_md": false 00:03:58.774 }, 00:03:58.774 "memory_domains": [ 00:03:58.774 { 00:03:58.774 "dma_device_id": "system", 00:03:58.774 "dma_device_type": 1 00:03:58.774 }, 00:03:58.774 { 00:03:58.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.774 "dma_device_type": 2 00:03:58.774 } 00:03:58.774 ], 00:03:58.774 "driver_specific": { 00:03:58.774 "passthru": { 00:03:58.774 "name": "Passthru0", 00:03:58.774 "base_bdev_name": "Malloc0" 00:03:58.774 } 00:03:58.774 } 00:03:58.774 } 00:03:58.774 ]' 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.774 
18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.774 18:17:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.774 00:03:58.774 real 0m0.108s 00:03:58.774 user 0m0.041s 00:03:58.774 sys 0m0.011s 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.774 ************************************ 00:03:58.774 END TEST rpc_integrity 00:03:58.774 ************************************ 00:03:58.774 18:17:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.774 18:17:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.774 18:17:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.774 18:17:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.774 18:17:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 ************************************ 00:03:58.774 START TEST rpc_plugins 00:03:58.774 ************************************ 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:58.774 { 00:03:58.774 "name": "Malloc1", 00:03:58.774 "aliases": [ 00:03:58.774 "8beb93e8-42d6-11ef-9ade-d5fc5159efa5" 00:03:58.774 ], 00:03:58.774 "product_name": "Malloc disk", 00:03:58.774 "block_size": 4096, 00:03:58.774 "num_blocks": 256, 00:03:58.774 "uuid": "8beb93e8-42d6-11ef-9ade-d5fc5159efa5", 00:03:58.774 "assigned_rate_limits": { 00:03:58.774 "rw_ios_per_sec": 0, 00:03:58.774 "rw_mbytes_per_sec": 0, 00:03:58.774 "r_mbytes_per_sec": 0, 00:03:58.774 "w_mbytes_per_sec": 0 00:03:58.774 }, 00:03:58.774 "claimed": false, 00:03:58.774 "zoned": false, 00:03:58.774 "supported_io_types": { 00:03:58.774 "read": true, 00:03:58.774 "write": true, 00:03:58.774 "unmap": true, 00:03:58.774 "flush": true, 00:03:58.774 "reset": true, 00:03:58.774 "nvme_admin": false, 00:03:58.774 "nvme_io": false, 00:03:58.774 "nvme_io_md": false, 00:03:58.774 "write_zeroes": true, 00:03:58.774 "zcopy": true, 00:03:58.774 "get_zone_info": false, 00:03:58.774 "zone_management": false, 00:03:58.774 "zone_append": false, 00:03:58.774 "compare": false, 00:03:58.774 "compare_and_write": false, 00:03:58.774 "abort": true, 00:03:58.774 "seek_hole": false, 00:03:58.774 "seek_data": false, 00:03:58.774 "copy": 
true, 00:03:58.774 "nvme_iov_md": false 00:03:58.774 }, 00:03:58.774 "memory_domains": [ 00:03:58.774 { 00:03:58.774 "dma_device_id": "system", 00:03:58.774 "dma_device_type": 1 00:03:58.774 }, 00:03:58.774 { 00:03:58.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.774 "dma_device_type": 2 00:03:58.774 } 00:03:58.774 ], 00:03:58.774 "driver_specific": {} 00:03:58.774 } 00:03:58.774 ]' 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:58.774 18:17:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:58.774 18:17:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:58.774 18:17:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:58.774 00:03:58.774 real 0m0.057s 00:03:58.774 user 0m0.020s 00:03:58.774 sys 0m0.005s 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.774 18:17:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 ************************************ 00:03:58.774 END TEST rpc_plugins 00:03:58.774 ************************************ 00:03:58.774 18:17:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.774 18:17:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:58.774 18:17:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.774 18:17:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.774 18:17:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 ************************************ 00:03:58.774 START TEST rpc_trace_cmd_test 00:03:58.774 ************************************ 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:58.774 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45499", 00:03:58.774 "tpoint_group_mask": "0x8", 00:03:58.774 "iscsi_conn": { 00:03:58.774 "mask": "0x2", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "scsi": { 00:03:58.774 "mask": "0x4", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "bdev": { 00:03:58.774 "mask": "0x8", 00:03:58.774 "tpoint_mask": "0xffffffffffffffff" 00:03:58.774 }, 00:03:58.774 "nvmf_rdma": { 00:03:58.774 "mask": "0x10", 00:03:58.774 
"tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "nvmf_tcp": { 00:03:58.774 "mask": "0x20", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "blobfs": { 00:03:58.774 "mask": "0x80", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "dsa": { 00:03:58.774 "mask": "0x200", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "thread": { 00:03:58.774 "mask": "0x400", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "nvme_pcie": { 00:03:58.774 "mask": "0x800", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "iaa": { 00:03:58.774 "mask": "0x1000", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "nvme_tcp": { 00:03:58.774 "mask": "0x2000", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "bdev_nvme": { 00:03:58.774 "mask": "0x4000", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 }, 00:03:58.774 "sock": { 00:03:58.774 "mask": "0x8000", 00:03:58.774 "tpoint_mask": "0x0" 00:03:58.774 } 00:03:58.774 }' 00:03:58.774 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:58.775 00:03:58.775 real 0m0.040s 00:03:58.775 user 0m0.013s 00:03:58.775 sys 0m0.020s 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.775 ************************************ 00:03:58.775 END TEST rpc_trace_cmd_test 00:03:58.775 18:17:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.775 ************************************ 00:03:58.775 18:17:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.775 18:17:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.775 18:17:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.775 18:17:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.775 18:17:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.775 18:17:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.775 18:17:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.775 ************************************ 00:03:58.775 START TEST rpc_daemon_integrity 00:03:58.775 ************************************ 00:03:58.775 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:58.775 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.775 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.775 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.033 
18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.033 { 00:03:59.033 "name": "Malloc2", 00:03:59.033 "aliases": [ 00:03:59.033 "8c070c58-42d6-11ef-9ade-d5fc5159efa5" 00:03:59.033 ], 00:03:59.033 "product_name": "Malloc disk", 00:03:59.033 "block_size": 512, 00:03:59.033 "num_blocks": 16384, 00:03:59.033 "uuid": "8c070c58-42d6-11ef-9ade-d5fc5159efa5", 00:03:59.033 "assigned_rate_limits": { 00:03:59.033 "rw_ios_per_sec": 0, 00:03:59.033 "rw_mbytes_per_sec": 0, 00:03:59.033 "r_mbytes_per_sec": 0, 00:03:59.033 "w_mbytes_per_sec": 0 00:03:59.033 }, 00:03:59.033 "claimed": false, 00:03:59.033 "zoned": false, 00:03:59.033 "supported_io_types": { 00:03:59.033 "read": true, 00:03:59.033 "write": true, 00:03:59.033 "unmap": true, 00:03:59.033 "flush": true, 00:03:59.033 "reset": true, 00:03:59.033 "nvme_admin": false, 00:03:59.033 "nvme_io": false, 00:03:59.033 "nvme_io_md": false, 00:03:59.033 "write_zeroes": true, 00:03:59.033 "zcopy": true, 00:03:59.033 "get_zone_info": false, 00:03:59.033 "zone_management": false, 00:03:59.033 "zone_append": false, 00:03:59.033 "compare": false, 00:03:59.033 "compare_and_write": false, 00:03:59.033 "abort": true, 00:03:59.033 "seek_hole": false, 00:03:59.033 "seek_data": false, 00:03:59.033 "copy": true, 00:03:59.033 "nvme_iov_md": false 00:03:59.033 }, 00:03:59.033 "memory_domains": [ 00:03:59.033 { 00:03:59.033 "dma_device_id": "system", 00:03:59.033 "dma_device_type": 1 00:03:59.033 }, 00:03:59.033 { 00:03:59.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.033 "dma_device_type": 2 00:03:59.033 } 00:03:59.033 ], 00:03:59.033 "driver_specific": {} 00:03:59.033 } 00:03:59.033 ]' 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.033 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 [2024-07-15 18:17:51.184134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.033 [2024-07-15 18:17:51.184208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.033 [2024-07-15 18:17:51.184245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7487037a00 00:03:59.034 [2024-07-15 
18:17:51.184259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.034 [2024-07-15 18:17:51.185044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.034 [2024-07-15 18:17:51.185077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.034 Passthru0 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.034 { 00:03:59.034 "name": "Malloc2", 00:03:59.034 "aliases": [ 00:03:59.034 "8c070c58-42d6-11ef-9ade-d5fc5159efa5" 00:03:59.034 ], 00:03:59.034 "product_name": "Malloc disk", 00:03:59.034 "block_size": 512, 00:03:59.034 "num_blocks": 16384, 00:03:59.034 "uuid": "8c070c58-42d6-11ef-9ade-d5fc5159efa5", 00:03:59.034 "assigned_rate_limits": { 00:03:59.034 "rw_ios_per_sec": 0, 00:03:59.034 "rw_mbytes_per_sec": 0, 00:03:59.034 "r_mbytes_per_sec": 0, 00:03:59.034 "w_mbytes_per_sec": 0 00:03:59.034 }, 00:03:59.034 "claimed": true, 00:03:59.034 "claim_type": "exclusive_write", 00:03:59.034 "zoned": false, 00:03:59.034 "supported_io_types": { 00:03:59.034 "read": true, 00:03:59.034 "write": true, 00:03:59.034 "unmap": true, 00:03:59.034 "flush": true, 00:03:59.034 "reset": true, 00:03:59.034 "nvme_admin": false, 00:03:59.034 "nvme_io": false, 00:03:59.034 "nvme_io_md": false, 00:03:59.034 "write_zeroes": true, 00:03:59.034 "zcopy": true, 00:03:59.034 "get_zone_info": false, 00:03:59.034 "zone_management": false, 00:03:59.034 "zone_append": false, 00:03:59.034 "compare": false, 00:03:59.034 "compare_and_write": false, 00:03:59.034 "abort": true, 00:03:59.034 "seek_hole": false, 00:03:59.034 "seek_data": false, 00:03:59.034 "copy": true, 00:03:59.034 "nvme_iov_md": false 00:03:59.034 }, 00:03:59.034 "memory_domains": [ 00:03:59.034 { 00:03:59.034 "dma_device_id": "system", 00:03:59.034 "dma_device_type": 1 00:03:59.034 }, 00:03:59.034 { 00:03:59.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.034 "dma_device_type": 2 00:03:59.034 } 00:03:59.034 ], 00:03:59.034 "driver_specific": {} 00:03:59.034 }, 00:03:59.034 { 00:03:59.034 "name": "Passthru0", 00:03:59.034 "aliases": [ 00:03:59.034 "9c78fece-a219-7953-80fe-f46f987b273c" 00:03:59.034 ], 00:03:59.034 "product_name": "passthru", 00:03:59.034 "block_size": 512, 00:03:59.034 "num_blocks": 16384, 00:03:59.034 "uuid": "9c78fece-a219-7953-80fe-f46f987b273c", 00:03:59.034 "assigned_rate_limits": { 00:03:59.034 "rw_ios_per_sec": 0, 00:03:59.034 "rw_mbytes_per_sec": 0, 00:03:59.034 "r_mbytes_per_sec": 0, 00:03:59.034 "w_mbytes_per_sec": 0 00:03:59.034 }, 00:03:59.034 "claimed": false, 00:03:59.034 "zoned": false, 00:03:59.034 "supported_io_types": { 00:03:59.034 "read": true, 00:03:59.034 "write": true, 00:03:59.034 "unmap": true, 00:03:59.034 "flush": true, 00:03:59.034 "reset": true, 00:03:59.034 "nvme_admin": false, 00:03:59.034 "nvme_io": false, 00:03:59.034 "nvme_io_md": false, 00:03:59.034 "write_zeroes": true, 00:03:59.034 "zcopy": true, 00:03:59.034 "get_zone_info": false, 00:03:59.034 "zone_management": false, 00:03:59.034 "zone_append": 
false, 00:03:59.034 "compare": false, 00:03:59.034 "compare_and_write": false, 00:03:59.034 "abort": true, 00:03:59.034 "seek_hole": false, 00:03:59.034 "seek_data": false, 00:03:59.034 "copy": true, 00:03:59.034 "nvme_iov_md": false 00:03:59.034 }, 00:03:59.034 "memory_domains": [ 00:03:59.034 { 00:03:59.034 "dma_device_id": "system", 00:03:59.034 "dma_device_type": 1 00:03:59.034 }, 00:03:59.034 { 00:03:59.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.034 "dma_device_type": 2 00:03:59.034 } 00:03:59.034 ], 00:03:59.034 "driver_specific": { 00:03:59.034 "passthru": { 00:03:59.034 "name": "Passthru0", 00:03:59.034 "base_bdev_name": "Malloc2" 00:03:59.034 } 00:03:59.034 } 00:03:59.034 } 00:03:59.034 ]' 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.034 00:03:59.034 real 0m0.113s 00:03:59.034 user 0m0.020s 00:03:59.034 sys 0m0.032s 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.034 18:17:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.034 ************************************ 00:03:59.034 END TEST rpc_daemon_integrity 00:03:59.034 ************************************ 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:59.034 18:17:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:59.034 18:17:51 rpc -- rpc/rpc.sh@84 -- # killprocess 45499 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@948 -- # '[' -z 45499 ']' 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@952 -- # kill -0 45499 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@953 -- # uname 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45499 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@956 -- # tail -1 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:59.034 18:17:51 rpc -- 
common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:59.034 killing process with pid 45499 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45499' 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@967 -- # kill 45499 00:03:59.034 18:17:51 rpc -- common/autotest_common.sh@972 -- # wait 45499 00:03:59.293 00:03:59.293 real 0m2.099s 00:03:59.293 user 0m2.076s 00:03:59.293 sys 0m0.991s 00:03:59.293 18:17:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.293 18:17:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.293 ************************************ 00:03:59.293 END TEST rpc 00:03:59.293 ************************************ 00:03:59.293 18:17:51 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.293 18:17:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.293 18:17:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.293 18:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.293 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.293 ************************************ 00:03:59.293 START TEST skip_rpc 00:03:59.293 ************************************ 00:03:59.293 18:17:51 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.551 * Looking for test storage... 00:03:59.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.551 18:17:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:59.551 18:17:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.551 18:17:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:59.551 18:17:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.551 18:17:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.551 18:17:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.551 ************************************ 00:03:59.551 START TEST skip_rpc 00:03:59.551 ************************************ 00:03:59.551 18:17:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:59.551 18:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45675 00:03:59.551 18:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:59.551 18:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.551 18:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:59.551 [2024-07-15 18:17:51.791761] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:03:59.551 [2024-07-15 18:17:51.791966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:00.119 EAL: TSC is not safe to use in SMP mode 00:04:00.119 EAL: TSC is not invariant 00:04:00.119 [2024-07-15 18:17:52.387820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.377 [2024-07-15 18:17:52.498771] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
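The skip_rpc case starting here boils down to the following pattern; this is a sketch assuming the SPDK repository as working directory, not the literal test code:

    # Launch the target with its RPC server disabled, then confirm
    # that an ordinary RPC fails; that failure is what the test
    # treats as success.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                              # mirrors the test's fixed wait
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "RPC server unexpectedly reachable" >&2
        exit 1
    fi
    kill "$spdk_pid" && wait "$spdk_pid"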
00:04:00.377 [2024-07-15 18:17:52.500930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45675 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45675 ']' 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45675 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45675 00:04:04.560 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:04.817 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:04.817 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:04.817 killing process with pid 45675 00:04:04.817 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45675' 00:04:04.817 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45675 00:04:04.817 18:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45675 00:04:05.076 00:04:05.076 real 0m5.438s 00:04:05.076 user 0m4.812s 00:04:05.076 sys 0m0.642s 00:04:05.076 ************************************ 00:04:05.076 END TEST skip_rpc 00:04:05.076 ************************************ 00:04:05.076 18:17:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.076 18:17:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.076 18:17:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.076 18:17:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:05.076 18:17:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.076 18:17:57 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.076 18:17:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.076 ************************************ 00:04:05.076 START TEST skip_rpc_with_json 00:04:05.076 ************************************ 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45720 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45720 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45720 ']' 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.076 18:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.076 [2024-07-15 18:17:57.271845] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:05.076 [2024-07-15 18:17:57.271999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:05.643 EAL: TSC is not safe to use in SMP mode 00:04:05.643 EAL: TSC is not invariant 00:04:05.643 [2024-07-15 18:17:57.852786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.643 [2024-07-15 18:17:57.980067] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
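The waitforlisten helper traced above blocks until the new target answers on the default Unix socket; a rough shell equivalent, with rpc_get_methods standing in as an assumed liveness probe and the retry budget taken from the trace's max_retries=100:

    # Poll /var/tmp/spdk.sock until the target accepts RPC
    # connections or the retry budget runs out.
    for (( i = 0; i < 100; i++ )); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.5
    done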
00:04:05.643 [2024-07-15 18:17:57.982239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.263 [2024-07-15 18:17:58.312120] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:06.263 request: 00:04:06.263 { 00:04:06.263 "trtype": "tcp", 00:04:06.263 "method": "nvmf_get_transports", 00:04:06.263 "req_id": 1 00:04:06.263 } 00:04:06.263 Got JSON-RPC error response 00:04:06.263 response: 00:04:06.263 { 00:04:06.263 "code": -19, 00:04:06.263 "message": "Operation not supported by device" 00:04:06.263 } 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.263 [2024-07-15 18:17:58.324193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.263 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.264 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.264 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.264 { 00:04:06.264 "subsystems": [ 00:04:06.264 { 00:04:06.264 "subsystem": "vmd", 00:04:06.264 "config": [] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "iobuf", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "iobuf_set_options", 00:04:06.264 "params": { 00:04:06.264 "small_pool_count": 8192, 00:04:06.264 "large_pool_count": 1024, 00:04:06.264 "small_bufsize": 8192, 00:04:06.264 "large_bufsize": 135168 00:04:06.264 } 00:04:06.264 } 00:04:06.264 ] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "scheduler", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "framework_set_scheduler", 00:04:06.264 "params": { 00:04:06.264 "name": "static" 00:04:06.264 } 00:04:06.264 } 00:04:06.264 ] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "sock", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "sock_set_default_impl", 00:04:06.264 "params": { 00:04:06.264 "impl_name": "posix" 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "sock_impl_set_options", 00:04:06.264 "params": { 00:04:06.264 "impl_name": "ssl", 00:04:06.264 "recv_buf_size": 4096, 00:04:06.264 "send_buf_size": 4096, 00:04:06.264 "enable_recv_pipe": true, 00:04:06.264 "enable_quickack": false, 00:04:06.264 "enable_placement_id": 0, 00:04:06.264 
"enable_zerocopy_send_server": true, 00:04:06.264 "enable_zerocopy_send_client": false, 00:04:06.264 "zerocopy_threshold": 0, 00:04:06.264 "tls_version": 0, 00:04:06.264 "enable_ktls": false 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "sock_impl_set_options", 00:04:06.264 "params": { 00:04:06.264 "impl_name": "posix", 00:04:06.264 "recv_buf_size": 2097152, 00:04:06.264 "send_buf_size": 2097152, 00:04:06.264 "enable_recv_pipe": true, 00:04:06.264 "enable_quickack": false, 00:04:06.264 "enable_placement_id": 0, 00:04:06.264 "enable_zerocopy_send_server": true, 00:04:06.264 "enable_zerocopy_send_client": false, 00:04:06.264 "zerocopy_threshold": 0, 00:04:06.264 "tls_version": 0, 00:04:06.264 "enable_ktls": false 00:04:06.264 } 00:04:06.264 } 00:04:06.264 ] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "keyring", 00:04:06.264 "config": [] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "accel", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "accel_set_options", 00:04:06.264 "params": { 00:04:06.264 "small_cache_size": 128, 00:04:06.264 "large_cache_size": 16, 00:04:06.264 "task_count": 2048, 00:04:06.264 "sequence_count": 2048, 00:04:06.264 "buf_count": 2048 00:04:06.264 } 00:04:06.264 } 00:04:06.264 ] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "bdev", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "bdev_set_options", 00:04:06.264 "params": { 00:04:06.264 "bdev_io_pool_size": 65535, 00:04:06.264 "bdev_io_cache_size": 256, 00:04:06.264 "bdev_auto_examine": true, 00:04:06.264 "iobuf_small_cache_size": 128, 00:04:06.264 "iobuf_large_cache_size": 16 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "bdev_raid_set_options", 00:04:06.264 "params": { 00:04:06.264 "process_window_size_kb": 1024 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "bdev_nvme_set_options", 00:04:06.264 "params": { 00:04:06.264 "action_on_timeout": "none", 00:04:06.264 "timeout_us": 0, 00:04:06.264 "timeout_admin_us": 0, 00:04:06.264 "keep_alive_timeout_ms": 10000, 00:04:06.264 "arbitration_burst": 0, 00:04:06.264 "low_priority_weight": 0, 00:04:06.264 "medium_priority_weight": 0, 00:04:06.264 "high_priority_weight": 0, 00:04:06.264 "nvme_adminq_poll_period_us": 10000, 00:04:06.264 "nvme_ioq_poll_period_us": 0, 00:04:06.264 "io_queue_requests": 0, 00:04:06.264 "delay_cmd_submit": true, 00:04:06.264 "transport_retry_count": 4, 00:04:06.264 "bdev_retry_count": 3, 00:04:06.264 "transport_ack_timeout": 0, 00:04:06.264 "ctrlr_loss_timeout_sec": 0, 00:04:06.264 "reconnect_delay_sec": 0, 00:04:06.264 "fast_io_fail_timeout_sec": 0, 00:04:06.264 "disable_auto_failback": false, 00:04:06.264 "generate_uuids": false, 00:04:06.264 "transport_tos": 0, 00:04:06.264 "nvme_error_stat": false, 00:04:06.264 "rdma_srq_size": 0, 00:04:06.264 "io_path_stat": false, 00:04:06.264 "allow_accel_sequence": false, 00:04:06.264 "rdma_max_cq_size": 0, 00:04:06.264 "rdma_cm_event_timeout_ms": 0, 00:04:06.264 "dhchap_digests": [ 00:04:06.264 "sha256", 00:04:06.264 "sha384", 00:04:06.264 "sha512" 00:04:06.264 ], 00:04:06.264 "dhchap_dhgroups": [ 00:04:06.264 "null", 00:04:06.264 "ffdhe2048", 00:04:06.264 "ffdhe3072", 00:04:06.264 "ffdhe4096", 00:04:06.264 "ffdhe6144", 00:04:06.264 "ffdhe8192" 00:04:06.264 ] 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "bdev_nvme_set_hotplug", 00:04:06.264 "params": { 00:04:06.264 "period_us": 100000, 00:04:06.264 "enable": false 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 
{ 00:04:06.264 "method": "bdev_wait_for_examine" 00:04:06.264 } 00:04:06.264 ] 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "scsi", 00:04:06.264 "config": null 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "subsystem": "nvmf", 00:04:06.264 "config": [ 00:04:06.264 { 00:04:06.264 "method": "nvmf_set_config", 00:04:06.264 "params": { 00:04:06.264 "discovery_filter": "match_any", 00:04:06.264 "admin_cmd_passthru": { 00:04:06.264 "identify_ctrlr": false 00:04:06.264 } 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "nvmf_set_max_subsystems", 00:04:06.264 "params": { 00:04:06.264 "max_subsystems": 1024 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "nvmf_set_crdt", 00:04:06.264 "params": { 00:04:06.264 "crdt1": 0, 00:04:06.264 "crdt2": 0, 00:04:06.264 "crdt3": 0 00:04:06.264 } 00:04:06.264 }, 00:04:06.264 { 00:04:06.264 "method": "nvmf_create_transport", 00:04:06.264 "params": { 00:04:06.264 "trtype": "TCP", 00:04:06.265 "max_queue_depth": 128, 00:04:06.265 "max_io_qpairs_per_ctrlr": 127, 00:04:06.265 "in_capsule_data_size": 4096, 00:04:06.265 "max_io_size": 131072, 00:04:06.265 "io_unit_size": 131072, 00:04:06.265 "max_aq_depth": 128, 00:04:06.265 "num_shared_buffers": 511, 00:04:06.265 "buf_cache_size": 4294967295, 00:04:06.265 "dif_insert_or_strip": false, 00:04:06.265 "zcopy": false, 00:04:06.265 "c2h_success": true, 00:04:06.265 "sock_priority": 0, 00:04:06.265 "abort_timeout_sec": 1, 00:04:06.265 "ack_timeout": 0, 00:04:06.265 "data_wr_pool_size": 0 00:04:06.265 } 00:04:06.265 } 00:04:06.265 ] 00:04:06.265 }, 00:04:06.265 { 00:04:06.265 "subsystem": "iscsi", 00:04:06.265 "config": [ 00:04:06.265 { 00:04:06.265 "method": "iscsi_set_options", 00:04:06.265 "params": { 00:04:06.265 "node_base": "iqn.2016-06.io.spdk", 00:04:06.265 "max_sessions": 128, 00:04:06.265 "max_connections_per_session": 2, 00:04:06.265 "max_queue_depth": 64, 00:04:06.265 "default_time2wait": 2, 00:04:06.265 "default_time2retain": 20, 00:04:06.265 "first_burst_length": 8192, 00:04:06.265 "immediate_data": true, 00:04:06.265 "allow_duplicated_isid": false, 00:04:06.265 "error_recovery_level": 0, 00:04:06.265 "nop_timeout": 60, 00:04:06.265 "nop_in_interval": 30, 00:04:06.265 "disable_chap": false, 00:04:06.265 "require_chap": false, 00:04:06.265 "mutual_chap": false, 00:04:06.265 "chap_group": 0, 00:04:06.265 "max_large_datain_per_connection": 64, 00:04:06.265 "max_r2t_per_connection": 4, 00:04:06.265 "pdu_pool_size": 36864, 00:04:06.265 "immediate_data_pool_size": 16384, 00:04:06.265 "data_out_pool_size": 2048 00:04:06.265 } 00:04:06.265 } 00:04:06.265 ] 00:04:06.265 } 00:04:06.265 ] 00:04:06.265 } 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45720 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45720 ']' 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45720 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45720 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:06.265 killing process with pid 45720 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45720' 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45720 00:04:06.265 18:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45720 00:04:06.538 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45734 00:04:06.538 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.538 18:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45734 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45734 ']' 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45734 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45734 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:11.798 killing process with pid 45734 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45734' 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45734 00:04:11.798 18:18:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45734 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.798 00:04:11.798 real 0m6.816s 00:04:11.798 user 0m6.038s 00:04:11.798 sys 0m1.328s 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.798 ************************************ 00:04:11.798 END TEST skip_rpc_with_json 00:04:11.798 ************************************ 00:04:11.798 18:18:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.798 18:18:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:11.798 18:18:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.798 18:18:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.798 18:18:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.798 ************************************ 00:04:11.798 START TEST skip_rpc_with_delay 00:04:11.798 ************************************ 00:04:11.798 18:18:04 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.798 [2024-07-15 18:18:04.138844] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
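Given the error above, the with_delay variant needs no RPC traffic at all: it only has to verify that this flag combination is refused at startup. A minimal sketch:

    # spdk_tgt must reject --wait-for-rpc when no RPC server will be
    # started; a zero exit status here would mean the test failed.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "expected spdk_tgt to reject --wait-for-rpc" >&2
        exit 1
    fi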
00:04:11.798 [2024-07-15 18:18:04.139195] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.798 00:04:11.798 real 0m0.013s 00:04:11.798 user 0m0.001s 00:04:11.798 sys 0m0.016s 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.798 18:18:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:11.798 ************************************ 00:04:11.798 END TEST skip_rpc_with_delay 00:04:11.798 ************************************ 00:04:12.057 18:18:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.057 18:18:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.057 18:18:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:04:12.057 18:18:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.057 00:04:12.057 real 0m12.545s 00:04:12.057 user 0m10.974s 00:04:12.057 sys 0m2.169s 00:04:12.057 18:18:04 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.057 ************************************ 00:04:12.057 END TEST skip_rpc 00:04:12.057 ************************************ 00:04:12.057 18:18:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.057 18:18:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.057 18:18:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:12.057 18:18:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.057 18:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.057 18:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.057 ************************************ 00:04:12.057 START TEST rpc_client 00:04:12.057 ************************************ 00:04:12.057 18:18:04 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:12.057 * Looking for test storage... 
00:04:12.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:12.057 18:18:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:12.057 OK 00:04:12.057 18:18:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.057 00:04:12.057 real 0m0.157s 00:04:12.057 user 0m0.119s 00:04:12.057 sys 0m0.116s 00:04:12.057 18:18:04 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.057 18:18:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.057 ************************************ 00:04:12.057 END TEST rpc_client 00:04:12.057 ************************************ 00:04:12.317 18:18:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.317 18:18:04 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:12.317 18:18:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.317 18:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.317 18:18:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.317 ************************************ 00:04:12.317 START TEST json_config 00:04:12.317 ************************************ 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.317 18:18:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:12.317 18:18:04 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:12.317 18:18:04 json_config -- nvmf/common.sh@7 -- # return 0 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:12.317 INFO: JSON configuration test init 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.317 18:18:04 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:12.317 18:18:04 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.317 18:18:04 json_config -- json_config/common.sh@10 -- # shift 00:04:12.317 18:18:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.317 18:18:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.317 18:18:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.317 18:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.317 18:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.317 18:18:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45893 00:04:12.317 Waiting for target to run... 00:04:12.317 18:18:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.317 18:18:04 json_config -- json_config/common.sh@25 -- # waitforlisten 45893 /var/tmp/spdk_tgt.sock 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 45893 ']' 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.317 18:18:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:12.317 18:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.317 [2024-07-15 18:18:04.595848] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:12.317 [2024-07-15 18:18:04.596135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:12.576 EAL: TSC is not safe to use in SMP mode 00:04:12.576 EAL: TSC is not invariant 00:04:12.576 [2024-07-15 18:18:04.923628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.833 [2024-07-15 18:18:05.067395] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:12.833 [2024-07-15 18:18:05.069583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:13.432 00:04:13.432 18:18:05 json_config -- json_config/common.sh@26 -- # echo '' 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:13.432 18:18:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:13.432 18:18:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:13.432 18:18:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:13.999 [2024-07-15 18:18:06.078948] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:13.999 18:18:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.999 18:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:13.999 18:18:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:13.999 18:18:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:14.257 18:18:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.257 18:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:04:14.257 
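Every tgt_rpc call in this test carries -s because the target listens on a private socket and was started paused with --wait-for-rpc. Written out by hand, the load step traced above and the save_config step that comes later look roughly like this (a sketch; rpc.py load_config reads its JSON from stdin):

    sock=/var/tmp/spdk_tgt.sock
    # Feed a generated NVMe configuration into the paused target...
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s "$sock" load_config
    # ...and later capture the live configuration back out as JSON.
    scripts/rpc.py -s "$sock" save_config > spdk_tgt_config.json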
18:18:06 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:04:14.257 18:18:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.257 18:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:14.257 18:18:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:14.257 18:18:06 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:04:14.516 18:18:06 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:04:14.516 18:18:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:04:14.775 Nvme0n1p0 Nvme0n1p1 00:04:14.775 18:18:07 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:04:14.775 18:18:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:04:15.034 [2024-07-15 18:18:07.341243] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:15.034 [2024-07-15 18:18:07.341308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:15.034 00:04:15.034 18:18:07 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:04:15.034 18:18:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:04:15.305 Malloc3 00:04:15.305 18:18:07 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:15.305 18:18:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:15.578 [2024-07-15 18:18:07.825265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:15.578 [2024-07-15 18:18:07.825326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:04:15.578 [2024-07-15 18:18:07.825355] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2003e5038180 00:04:15.578 [2024-07-15 18:18:07.825364] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.578 [2024-07-15 18:18:07.826082] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.578 [2024-07-15 18:18:07.826106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:15.578 PTBdevFromMalloc3 00:04:15.578 18:18:07 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:04:15.578 18:18:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:04:15.837 Null0 00:04:15.837 18:18:08 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:04:15.837 18:18:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:04:16.095 Malloc0 00:04:16.095 18:18:08 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:04:16.095 18:18:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:04:16.352 Malloc1 00:04:16.352 18:18:08 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:04:16.352 18:18:08 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:04:16.610 102400+0 records in 00:04:16.610 102400+0 records out 00:04:16.610 104857600 bytes transferred in 0.285580 secs (367173723 bytes/sec) 00:04:16.610 18:18:08 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:04:16.610 18:18:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:04:16.868 aio_disk 00:04:16.868 18:18:09 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:04:16.868 18:18:09 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:16.868 18:18:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:17.434 96f6ea98-42d6-11ef-9ade-d5fc5159efa5 00:04:17.434 18:18:09 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:04:17.435 18:18:09 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:04:17.435 18:18:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:04:17.435 18:18:09 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:04:17.435 18:18:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:04:17.693 18:18:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:17.693 18:18:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:17.952 18:18:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:17.952 18:18:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@71 -- # sort 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@72 -- # sort 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.210 18:18:10 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:18.210 18:18:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:18.210 18:18:10 
json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # 
echo bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\7\1\d\5\e\c\8\-\4\2\d\6\-\1\1\e\f\-\9\a\d\e\-\d\5\f\c\5\1\5\9\e\f\a\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\7\4\1\f\d\e\2\-\4\2\d\6\-\1\1\e\f\-\9\a\d\e\-\d\5\f\c\5\1\5\9\e\f\a\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\7\6\6\0\0\f\d\-\4\2\d\6\-\1\1\e\f\-\9\a\d\e\-\d\5\f\c\5\1\5\9\e\f\a\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\7\9\1\f\3\a\d\-\4\2\d\6\-\1\1\e\f\-\9\a\d\e\-\d\5\f\c\5\1\5\9\e\f\a\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@86 -- # cat 00:04:18.469 18:18:10 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:04:18.727 Expected events matched: 00:04:18.727 bdev_register:971d5ec8-42d6-11ef-9ade-d5fc5159efa5 00:04:18.727 bdev_register:9741fde2-42d6-11ef-9ade-d5fc5159efa5 00:04:18.727 
bdev_register:976600fd-42d6-11ef-9ade-d5fc5159efa5 00:04:18.727 bdev_register:9791f3ad-42d6-11ef-9ade-d5fc5159efa5 00:04:18.727 bdev_register:Malloc0 00:04:18.727 bdev_register:Malloc0p0 00:04:18.727 bdev_register:Malloc0p1 00:04:18.727 bdev_register:Malloc0p2 00:04:18.727 bdev_register:Malloc1 00:04:18.727 bdev_register:Malloc3 00:04:18.727 bdev_register:Null0 00:04:18.727 bdev_register:Nvme0n1 00:04:18.727 bdev_register:Nvme0n1p0 00:04:18.727 bdev_register:Nvme0n1p1 00:04:18.727 bdev_register:PTBdevFromMalloc3 00:04:18.727 bdev_register:aio_disk 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:04:18.727 18:18:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.727 18:18:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:18.727 18:18:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.727 18:18:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:18.727 18:18:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.727 18:18:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.987 MallocBdevForConfigChangeCheck 00:04:18.987 18:18:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:18.987 18:18:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.987 18:18:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.987 18:18:11 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:18.987 18:18:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.246 INFO: shutting down applications... 00:04:19.246 18:18:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
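The shutdown path that follows leans on two helper scripts visible in the trace; their combined effect, in isolation (a sketch run from the repository root):

    # Wipe every user-created object over the private socket, then
    # confirm the remaining saved config is empty once global
    # parameters are filtered out.
    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty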
00:04:19.246 18:18:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:19.246 18:18:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:19.246 18:18:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:19.246 18:18:11 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.503 [2024-07-15 18:18:11.649476] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:19.503 Calling clear_iscsi_subsystem 00:04:19.503 Calling clear_nvmf_subsystem 00:04:19.503 Calling clear_bdev_subsystem 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.503 18:18:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:20.068 18:18:12 json_config -- json_config/json_config.sh@345 -- # break 00:04:20.069 18:18:12 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:20.069 18:18:12 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:20.069 18:18:12 json_config -- json_config/common.sh@31 -- # local app=target 00:04:20.069 18:18:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.069 18:18:12 json_config -- json_config/common.sh@35 -- # [[ -n 45893 ]] 00:04:20.069 18:18:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45893 00:04:20.069 18:18:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.069 18:18:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.069 18:18:12 json_config -- json_config/common.sh@41 -- # kill -0 45893 00:04:20.069 18:18:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.327 18:18:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.327 18:18:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.327 18:18:12 json_config -- json_config/common.sh@41 -- # kill -0 45893 00:04:20.327 18:18:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.327 18:18:12 json_config -- json_config/common.sh@43 -- # break 00:04:20.327 18:18:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.327 18:18:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.327 SPDK target shutdown done 00:04:20.327 INFO: relaunching applications... 00:04:20.327 18:18:12 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
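The 'SPDK target shutdown done' message above comes from a polling loop in json_config/common.sh: send SIGINT, then probe the PID with kill -0 until it disappears, up to 30 half-second tries as traced. Paraphrased as a standalone sketch:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only tests whether the PID is still alive
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'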
00:04:20.327 18:18:12 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.327 18:18:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.327 18:18:12 json_config -- json_config/common.sh@10 -- # shift 00:04:20.327 18:18:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.327 18:18:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.327 18:18:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.327 18:18:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.327 18:18:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.327 18:18:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46081 00:04:20.327 18:18:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.327 Waiting for target to run... 00:04:20.327 18:18:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.327 18:18:12 json_config -- json_config/common.sh@25 -- # waitforlisten 46081 /var/tmp/spdk_tgt.sock 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 46081 ']' 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.327 18:18:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.585 [2024-07-15 18:18:12.687812] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:20.585 [2024-07-15 18:18:12.688042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:20.843 EAL: TSC is not safe to use in SMP mode 00:04:20.843 EAL: TSC is not invariant 00:04:20.843 [2024-07-15 18:18:13.022223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.843 [2024-07-15 18:18:13.135435] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:20.843 [2024-07-15 18:18:13.137879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.102 [2024-07-15 18:18:13.288166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:21.102 [2024-07-15 18:18:13.288226] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:21.102 [2024-07-15 18:18:13.296156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:21.102 [2024-07-15 18:18:13.296196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:21.102 [2024-07-15 18:18:13.304174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:21.102 [2024-07-15 18:18:13.304216] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:21.102 [2024-07-15 18:18:13.304226] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:21.102 [2024-07-15 18:18:13.312173] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:21.102 [2024-07-15 18:18:13.381033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:21.102 [2024-07-15 18:18:13.381100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.102 [2024-07-15 18:18:13.381113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x203b80c37780 00:04:21.102 [2024-07-15 18:18:13.381122] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.102 [2024-07-15 18:18:13.381232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.102 [2024-07-15 18:18:13.381243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:21.667 18:18:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.667 18:18:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:21.667 00:04:21.667 18:18:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.667 18:18:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:21.667 INFO: Checking if target configuration is the same... 00:04:21.667 18:18:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.668 18:18:13 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.hX7XPY /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.668 + '[' 2 -ne 2 ']' 00:04:21.668 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.668 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:21.668 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.668 +++ basename /tmp//sh-np.hX7XPY 00:04:21.668 ++ mktemp /tmp/sh-np.hX7XPY.XXX 00:04:21.668 + tmp_file_1=/tmp/sh-np.hX7XPY.wJ4 00:04:21.668 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.668 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.668 + tmp_file_2=/tmp/spdk_tgt_config.json.7d1 00:04:21.668 + ret=0 00:04:21.668 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.668 18:18:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:21.668 18:18:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.926 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.926 + diff -u /tmp/sh-np.hX7XPY.wJ4 /tmp/spdk_tgt_config.json.7d1 00:04:21.926 INFO: JSON config files are the same 00:04:21.926 + echo 'INFO: JSON config files are the same' 00:04:21.926 + rm /tmp/sh-np.hX7XPY.wJ4 /tmp/spdk_tgt_config.json.7d1 00:04:21.926 + exit 0 00:04:21.926 18:18:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:21.926 INFO: changing configuration and checking if this can be detected... 00:04:21.926 18:18:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.926 18:18:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.926 18:18:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.184 18:18:14 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.xQ8Q06 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.184 + '[' 2 -ne 2 ']' 00:04:22.184 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:22.184 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:22.184 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:22.184 +++ basename /tmp//sh-np.xQ8Q06 00:04:22.184 ++ mktemp /tmp/sh-np.xQ8Q06.XXX 00:04:22.184 + tmp_file_1=/tmp/sh-np.xQ8Q06.NqN 00:04:22.184 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.184 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.184 + tmp_file_2=/tmp/spdk_tgt_config.json.t8X 00:04:22.184 + ret=0 00:04:22.184 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.443 18:18:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:22.443 18:18:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.701 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.701 + diff -u /tmp/sh-np.xQ8Q06.NqN /tmp/spdk_tgt_config.json.t8X 00:04:22.701 + ret=1 00:04:22.701 + echo '=== Start of file: /tmp/sh-np.xQ8Q06.NqN ===' 00:04:22.701 + cat /tmp/sh-np.xQ8Q06.NqN 00:04:22.701 + echo '=== End of file: /tmp/sh-np.xQ8Q06.NqN ===' 00:04:22.701 + echo '' 00:04:22.701 + echo '=== Start of file: /tmp/spdk_tgt_config.json.t8X ===' 00:04:22.701 + cat /tmp/spdk_tgt_config.json.t8X 00:04:22.701 + echo '=== End of file: /tmp/spdk_tgt_config.json.t8X ===' 00:04:22.701 + echo '' 00:04:22.701 + rm /tmp/sh-np.xQ8Q06.NqN /tmp/spdk_tgt_config.json.t8X 00:04:22.701 + exit 1 00:04:22.701 INFO: configuration change detected. 00:04:22.701 18:18:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:22.702 18:18:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.702 18:18:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@317 -- # [[ -n 46081 ]] 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.702 18:18:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.702 18:18:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:04:22.702 18:18:14 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:22.702 18:18:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:22.960 18:18:15 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:22.960 18:18:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:23.233 18:18:15 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:23.233 18:18:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
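Both json_diff.sh runs above follow the same recipe: normalize each config with config_filter.py -method sort into a temp file, then let diff -u decide. A condensed sketch (stdin redirection is assumed from the argument-less filter invocations in the trace):

    config_a=$1 config_b=$2
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    tmp1=$(mktemp) && "$filter" -method sort < "$config_a" > "$tmp1"
    tmp2=$(mktemp) && "$filter" -method sort < "$config_b" > "$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'   # first run above: exit 0
        ret=0
    else
        cat "$tmp1" "$tmp2"   # second run dumps both files, then exits 1 and the
        ret=1                 # caller reports 'INFO: configuration change detected.'
    fi
    rm -f "$tmp1" "$tmp2"
    exit "$ret"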
00:04:23.522 18:18:15 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:23.522 18:18:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:23.779 18:18:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:23.779 18:18:16 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:04:23.779 18:18:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:23.779 18:18:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.779 18:18:16 json_config -- json_config/json_config.sh@323 -- # killprocess 46081 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@948 -- # '[' -z 46081 ']' 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@952 -- # kill -0 46081 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@953 -- # uname 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46081 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@956 -- # tail -1 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:23.779 killing process with pid 46081 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46081' 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@967 -- # kill 46081 00:04:23.779 18:18:16 json_config -- common/autotest_common.sh@972 -- # wait 46081 00:04:24.038 18:18:16 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.296 18:18:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:24.296 18:18:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.296 18:18:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.296 18:18:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:24.296 INFO: Success 00:04:24.296 18:18:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:24.296 00:04:24.296 real 0m12.005s 00:04:24.296 user 0m19.241s 00:04:24.296 sys 0m1.993s 00:04:24.296 18:18:16 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.296 18:18:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.296 ************************************ 00:04:24.296 END TEST json_config 00:04:24.296 ************************************ 00:04:24.296 18:18:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.296 18:18:16 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:24.296 18:18:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.296 18:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.296 18:18:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.296 ************************************ 00:04:24.296 START TEST json_config_extra_key 
00:04:24.296 ************************************ 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.296 18:18:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:24.296 18:18:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:24.296 18:18:16 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:24.296 INFO: launching applications... 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:24.296 18:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46212 00:04:24.296 Waiting for target to run... 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
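The declare -A lines above are how json_config/common.sh tracks per-application state: one bash associative array per attribute, keyed by app name ('target' here). The pattern, reconstructed from this trace:

    declare -A app_pid=([target]='')    # filled in once spdk_tgt is forked
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    # later lookups go through the app name, e.g. kill -SIGINT "${app_pid[$app]}"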
00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46212 /var/tmp/spdk_tgt.sock 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46212 ']' 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.296 18:18:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:24.296 18:18:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.296 [2024-07-15 18:18:16.629609] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:24.296 [2024-07-15 18:18:16.629815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:24.863 EAL: TSC is not safe to use in SMP mode 00:04:24.863 EAL: TSC is not invariant 00:04:24.863 [2024-07-15 18:18:17.001352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.863 [2024-07-15 18:18:17.105447] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:24.863 [2024-07-15 18:18:17.108457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.428 18:18:17 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.428 18:18:17 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:25.428 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:25.428 INFO: shutting down applications... 00:04:25.428 18:18:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
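Taken together, the launch amounts to backgrounding spdk_tgt with the extra-key JSON and blocking until the RPC socket answers. The spdk_tgt command line is verbatim from the trace; the readiness probe below is an assumed equivalent of waitforlisten (whose internals are not shown in this log), built on the spdk_get_version RPC:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid[target]=$!
    # assumed stand-in for waitforlisten: poll until the target accepts RPCs
    # (max_retries=100 as in the trace)
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done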
00:04:25.428 18:18:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46212 ]] 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46212 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46212 00:04:25.428 18:18:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.993 18:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46212 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:25.994 SPDK target shutdown done 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.994 18:18:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.994 Success 00:04:25.994 18:18:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:25.994 00:04:25.994 real 0m1.782s 00:04:25.994 user 0m1.695s 00:04:25.994 sys 0m0.477s 00:04:25.994 18:18:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.994 18:18:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:25.994 ************************************ 00:04:25.994 END TEST json_config_extra_key 00:04:25.994 ************************************ 00:04:25.994 18:18:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.994 18:18:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.994 18:18:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.994 18:18:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.994 18:18:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.994 ************************************ 00:04:25.994 START TEST alias_rpc 00:04:25.994 ************************************ 00:04:25.994 18:18:18 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.251 * Looking for test storage... 
00:04:26.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:26.251 18:18:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.251 18:18:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46270 00:04:26.251 18:18:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.251 18:18:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46270 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46270 ']' 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.251 18:18:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.251 [2024-07-15 18:18:18.452832] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:26.251 [2024-07-15 18:18:18.453089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:26.816 EAL: TSC is not safe to use in SMP mode 00:04:26.816 EAL: TSC is not invariant 00:04:26.816 [2024-07-15 18:18:19.049095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.816 [2024-07-15 18:18:19.157024] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:26.816 [2024-07-15 18:18:19.159230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.383 18:18:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.383 18:18:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:27.383 18:18:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:27.642 18:18:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46270 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46270 ']' 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46270 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46270 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:27.642 killing process with pid 46270 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46270' 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 46270 00:04:27.642 18:18:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 46270 00:04:27.903 00:04:27.903 real 0m1.884s 00:04:27.903 user 0m2.001s 00:04:27.903 sys 0m0.803s 00:04:27.903 ************************************ 00:04:27.903 END TEST alias_rpc 00:04:27.903 ************************************ 00:04:27.903 18:18:20 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.903 18:18:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.903 18:18:20 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.903 18:18:20 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:27.903 18:18:20 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:27.903 18:18:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.903 18:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.903 18:18:20 -- common/autotest_common.sh@10 -- # set +x 00:04:27.903 ************************************ 00:04:27.903 START TEST spdkcli_tcp 00:04:27.903 ************************************ 00:04:27.903 18:18:20 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:28.177 * Looking for test storage... 
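The kill sequence for pid 46270 above is the FreeBSD branch of the killprocess helper: it first confirms the PID is alive, then reads the process's command name and refuses to signal anything running as sudo. Paraphrased from the trace:

    pid=46270
    kill -0 "$pid" || return    # (inside killprocess) bail out if already gone
    if [ "$(uname)" != Linux ]; then
        # FreeBSD path taken here: -c trims ps output to the bare command name
        process_name=$(ps -c -o command "$pid" | tail -1)
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and propagate its exit status
    fi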
00:04:28.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46335 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.177 18:18:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46335 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46335 ']' 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.177 18:18:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 [2024-07-15 18:18:20.397098] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:28.177 [2024-07-15 18:18:20.397266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:28.773 EAL: TSC is not safe to use in SMP mode 00:04:28.773 EAL: TSC is not invariant 00:04:28.774 [2024-07-15 18:18:21.026326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.033 [2024-07-15 18:18:21.143441] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:29.033 [2024-07-15 18:18:21.143504] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:29.033 [2024-07-15 18:18:21.146759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.033 [2024-07-15 18:18:21.146747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.293 18:18:21 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.293 18:18:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:29.293 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46343 00:04:29.293 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:29.293 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:29.552 [ 00:04:29.552 "spdk_get_version", 00:04:29.552 "rpc_get_methods", 00:04:29.552 "env_dpdk_get_mem_stats", 00:04:29.552 "trace_get_info", 00:04:29.552 "trace_get_tpoint_group_mask", 00:04:29.552 "trace_disable_tpoint_group", 00:04:29.552 "trace_enable_tpoint_group", 00:04:29.552 "trace_clear_tpoint_mask", 00:04:29.552 "trace_set_tpoint_mask", 00:04:29.552 "notify_get_notifications", 00:04:29.552 "notify_get_types", 00:04:29.552 "accel_get_stats", 00:04:29.552 "accel_set_options", 00:04:29.552 "accel_set_driver", 00:04:29.552 "accel_crypto_key_destroy", 00:04:29.552 "accel_crypto_keys_get", 00:04:29.552 "accel_crypto_key_create", 00:04:29.552 "accel_assign_opc", 00:04:29.552 "accel_get_module_info", 00:04:29.552 "accel_get_opc_assignments", 00:04:29.552 "bdev_get_histogram", 00:04:29.552 "bdev_enable_histogram", 00:04:29.552 "bdev_set_qos_limit", 00:04:29.552 "bdev_set_qd_sampling_period", 00:04:29.552 "bdev_get_bdevs", 00:04:29.552 "bdev_reset_iostat", 00:04:29.553 "bdev_get_iostat", 00:04:29.553 "bdev_examine", 00:04:29.553 "bdev_wait_for_examine", 00:04:29.553 "bdev_set_options", 00:04:29.553 "keyring_get_keys", 00:04:29.553 "framework_get_pci_devices", 00:04:29.553 "framework_get_config", 00:04:29.553 "framework_get_subsystems", 00:04:29.553 "sock_get_default_impl", 00:04:29.553 "sock_set_default_impl", 00:04:29.553 "sock_impl_set_options", 00:04:29.553 "sock_impl_get_options", 00:04:29.553 "thread_set_cpumask", 00:04:29.553 "framework_get_governor", 00:04:29.553 "framework_get_scheduler", 00:04:29.553 "framework_set_scheduler", 00:04:29.553 "framework_get_reactors", 00:04:29.553 "thread_get_io_channels", 00:04:29.553 "thread_get_pollers", 00:04:29.553 "thread_get_stats", 00:04:29.553 "framework_monitor_context_switch", 00:04:29.553 "spdk_kill_instance", 00:04:29.553 "log_enable_timestamps", 00:04:29.553 "log_get_flags", 00:04:29.553 "log_clear_flag", 00:04:29.553 "log_set_flag", 00:04:29.553 "log_get_level", 00:04:29.553 "log_set_level", 00:04:29.553 "log_get_print_level", 00:04:29.553 "log_set_print_level", 00:04:29.553 "framework_enable_cpumask_locks", 00:04:29.553 "framework_disable_cpumask_locks", 00:04:29.553 "framework_wait_init", 00:04:29.553 "framework_start_init", 00:04:29.553 "iobuf_get_stats", 00:04:29.553 "iobuf_set_options", 00:04:29.553 "vmd_rescan", 00:04:29.553 "vmd_remove_device", 00:04:29.553 "vmd_enable", 00:04:29.553 "nvmf_stop_mdns_prr", 00:04:29.553 "nvmf_publish_mdns_prr", 00:04:29.553 "nvmf_subsystem_get_listeners", 00:04:29.553 "nvmf_subsystem_get_qpairs", 00:04:29.553 "nvmf_subsystem_get_controllers", 00:04:29.553 "nvmf_get_stats", 00:04:29.553 "nvmf_get_transports", 00:04:29.553 "nvmf_create_transport", 00:04:29.553 "nvmf_get_targets", 00:04:29.553 "nvmf_delete_target", 00:04:29.553 "nvmf_create_target", 00:04:29.553 
"nvmf_subsystem_allow_any_host", 00:04:29.553 "nvmf_subsystem_remove_host", 00:04:29.553 "nvmf_subsystem_add_host", 00:04:29.553 "nvmf_ns_remove_host", 00:04:29.553 "nvmf_ns_add_host", 00:04:29.553 "nvmf_subsystem_remove_ns", 00:04:29.553 "nvmf_subsystem_add_ns", 00:04:29.553 "nvmf_subsystem_listener_set_ana_state", 00:04:29.553 "nvmf_discovery_get_referrals", 00:04:29.553 "nvmf_discovery_remove_referral", 00:04:29.553 "nvmf_discovery_add_referral", 00:04:29.553 "nvmf_subsystem_remove_listener", 00:04:29.553 "nvmf_subsystem_add_listener", 00:04:29.553 "nvmf_delete_subsystem", 00:04:29.553 "nvmf_create_subsystem", 00:04:29.553 "nvmf_get_subsystems", 00:04:29.553 "nvmf_set_crdt", 00:04:29.553 "nvmf_set_config", 00:04:29.553 "nvmf_set_max_subsystems", 00:04:29.553 "scsi_get_devices", 00:04:29.553 "iscsi_get_histogram", 00:04:29.553 "iscsi_enable_histogram", 00:04:29.553 "iscsi_set_options", 00:04:29.553 "iscsi_get_auth_groups", 00:04:29.553 "iscsi_auth_group_remove_secret", 00:04:29.553 "iscsi_auth_group_add_secret", 00:04:29.553 "iscsi_delete_auth_group", 00:04:29.553 "iscsi_create_auth_group", 00:04:29.553 "iscsi_set_discovery_auth", 00:04:29.553 "iscsi_get_options", 00:04:29.553 "iscsi_target_node_request_logout", 00:04:29.553 "iscsi_target_node_set_redirect", 00:04:29.553 "iscsi_target_node_set_auth", 00:04:29.553 "iscsi_target_node_add_lun", 00:04:29.553 "iscsi_get_stats", 00:04:29.553 "iscsi_get_connections", 00:04:29.553 "iscsi_portal_group_set_auth", 00:04:29.553 "iscsi_start_portal_group", 00:04:29.553 "iscsi_delete_portal_group", 00:04:29.553 "iscsi_create_portal_group", 00:04:29.553 "iscsi_get_portal_groups", 00:04:29.553 "iscsi_delete_target_node", 00:04:29.553 "iscsi_target_node_remove_pg_ig_maps", 00:04:29.553 "iscsi_target_node_add_pg_ig_maps", 00:04:29.553 "iscsi_create_target_node", 00:04:29.553 "iscsi_get_target_nodes", 00:04:29.553 "iscsi_delete_initiator_group", 00:04:29.553 "iscsi_initiator_group_remove_initiators", 00:04:29.553 "iscsi_initiator_group_add_initiators", 00:04:29.553 "iscsi_create_initiator_group", 00:04:29.553 "iscsi_get_initiator_groups", 00:04:29.553 "keyring_file_remove_key", 00:04:29.553 "keyring_file_add_key", 00:04:29.553 "iaa_scan_accel_module", 00:04:29.553 "dsa_scan_accel_module", 00:04:29.553 "ioat_scan_accel_module", 00:04:29.553 "accel_error_inject_error", 00:04:29.553 "bdev_aio_delete", 00:04:29.553 "bdev_aio_rescan", 00:04:29.553 "bdev_aio_create", 00:04:29.553 "blobfs_create", 00:04:29.553 "blobfs_detect", 00:04:29.553 "blobfs_set_cache_size", 00:04:29.553 "bdev_zone_block_delete", 00:04:29.553 "bdev_zone_block_create", 00:04:29.553 "bdev_delay_delete", 00:04:29.553 "bdev_delay_create", 00:04:29.553 "bdev_delay_update_latency", 00:04:29.553 "bdev_split_delete", 00:04:29.553 "bdev_split_create", 00:04:29.553 "bdev_error_inject_error", 00:04:29.553 "bdev_error_delete", 00:04:29.553 "bdev_error_create", 00:04:29.553 "bdev_raid_set_options", 00:04:29.553 "bdev_raid_remove_base_bdev", 00:04:29.553 "bdev_raid_add_base_bdev", 00:04:29.553 "bdev_raid_delete", 00:04:29.553 "bdev_raid_create", 00:04:29.553 "bdev_raid_get_bdevs", 00:04:29.553 "bdev_lvol_set_parent_bdev", 00:04:29.553 "bdev_lvol_set_parent", 00:04:29.553 "bdev_lvol_check_shallow_copy", 00:04:29.553 "bdev_lvol_start_shallow_copy", 00:04:29.553 "bdev_lvol_grow_lvstore", 00:04:29.553 "bdev_lvol_get_lvols", 00:04:29.553 "bdev_lvol_get_lvstores", 00:04:29.553 "bdev_lvol_delete", 00:04:29.553 "bdev_lvol_set_read_only", 00:04:29.553 "bdev_lvol_resize", 00:04:29.553 "bdev_lvol_decouple_parent", 
00:04:29.553 "bdev_lvol_inflate", 00:04:29.553 "bdev_lvol_rename", 00:04:29.553 "bdev_lvol_clone_bdev", 00:04:29.553 "bdev_lvol_clone", 00:04:29.553 "bdev_lvol_snapshot", 00:04:29.553 "bdev_lvol_create", 00:04:29.553 "bdev_lvol_delete_lvstore", 00:04:29.553 "bdev_lvol_rename_lvstore", 00:04:29.553 "bdev_lvol_create_lvstore", 00:04:29.553 "bdev_passthru_delete", 00:04:29.553 "bdev_passthru_create", 00:04:29.553 "bdev_nvme_send_cmd", 00:04:29.553 "bdev_nvme_get_path_iostat", 00:04:29.553 "bdev_nvme_get_mdns_discovery_info", 00:04:29.553 "bdev_nvme_stop_mdns_discovery", 00:04:29.553 "bdev_nvme_start_mdns_discovery", 00:04:29.553 "bdev_nvme_set_multipath_policy", 00:04:29.553 "bdev_nvme_set_preferred_path", 00:04:29.553 "bdev_nvme_get_io_paths", 00:04:29.553 "bdev_nvme_remove_error_injection", 00:04:29.553 "bdev_nvme_add_error_injection", 00:04:29.553 "bdev_nvme_get_discovery_info", 00:04:29.553 "bdev_nvme_stop_discovery", 00:04:29.553 "bdev_nvme_start_discovery", 00:04:29.553 "bdev_nvme_get_controller_health_info", 00:04:29.553 "bdev_nvme_disable_controller", 00:04:29.553 "bdev_nvme_enable_controller", 00:04:29.553 "bdev_nvme_reset_controller", 00:04:29.553 "bdev_nvme_get_transport_statistics", 00:04:29.553 "bdev_nvme_apply_firmware", 00:04:29.553 "bdev_nvme_detach_controller", 00:04:29.553 "bdev_nvme_get_controllers", 00:04:29.553 "bdev_nvme_attach_controller", 00:04:29.553 "bdev_nvme_set_hotplug", 00:04:29.553 "bdev_nvme_set_options", 00:04:29.553 "bdev_null_resize", 00:04:29.553 "bdev_null_delete", 00:04:29.553 "bdev_null_create", 00:04:29.553 "bdev_malloc_delete", 00:04:29.553 "bdev_malloc_create" 00:04:29.553 ] 00:04:29.553 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.553 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:29.553 18:18:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46335 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46335 ']' 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46335 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46335 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:29.553 killing process with pid 46335 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46335' 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46335 00:04:29.553 18:18:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46335 00:04:29.812 00:04:29.812 real 0m1.820s 00:04:29.812 user 0m2.602s 00:04:29.812 sys 0m0.859s 00:04:29.812 ************************************ 00:04:29.812 END TEST spdkcli_tcp 00:04:29.812 ************************************ 00:04:29.812 18:18:22 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.812 18:18:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 18:18:22 -- common/autotest_common.sh@1142 -- # return 
0 00:04:29.812 18:18:22 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:29.812 18:18:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.812 18:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.812 18:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 ************************************ 00:04:29.812 START TEST dpdk_mem_utility 00:04:29.812 ************************************ 00:04:29.812 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.071 * Looking for test storage... 00:04:30.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:30.071 18:18:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:30.071 18:18:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46414 00:04:30.071 18:18:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46414 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46414 ']' 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.071 18:18:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.071 18:18:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.071 [2024-07-15 18:18:22.233862] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:30.071 [2024-07-15 18:18:22.234029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:30.640 EAL: TSC is not safe to use in SMP mode 00:04:30.640 EAL: TSC is not invariant 00:04:30.640 [2024-07-15 18:18:22.839801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.640 [2024-07-15 18:18:22.976669] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:30.640 [2024-07-15 18:18:22.979662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.207 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.207 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:31.207 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.207 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.207 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.207 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.207 { 00:04:31.207 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.207 } 00:04:31.207 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.207 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.207 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:31.207 1 heaps totaling size 2048.000000 MiB 00:04:31.207 size: 2048.000000 MiB heap id: 0 00:04:31.207 end heaps---------- 00:04:31.207 8 mempools totaling size 592.563660 MiB 00:04:31.207 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:31.207 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:31.207 size: 84.500549 MiB name: bdev_io_46414 00:04:31.207 size: 51.008362 MiB name: evtpool_46414 00:04:31.207 size: 50.000549 MiB name: msgpool_46414 00:04:31.207 size: 21.758911 MiB name: PDU_Pool 00:04:31.207 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:31.207 size: 0.026123 MiB name: Session_Pool 00:04:31.207 end mempools------- 00:04:31.207 6 memzones totaling size 4.142822 MiB 00:04:31.207 size: 1.000366 MiB name: RG_ring_0_46414 00:04:31.207 size: 1.000366 MiB name: RG_ring_1_46414 00:04:31.207 size: 1.000366 MiB name: RG_ring_4_46414 00:04:31.207 size: 1.000366 MiB name: RG_ring_5_46414 00:04:31.207 size: 0.125366 MiB name: RG_ring_2_46414 00:04:31.207 size: 0.015991 MiB name: RG_ring_3_46414 00:04:31.207 end memzones------- 00:04:31.207 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.207 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 5 00:04:31.207 list of free elements. size: 1254.072021 MiB 00:04:31.207 element at address: 0x1060000000 with size: 1127.550476 MiB 00:04:31.207 element at address: 0x1100000000 with size: 88.694702 MiB 00:04:31.207 element at address: 0x10c0000000 with size: 26.986328 MiB 00:04:31.207 element at address: 0x10e0000000 with size: 10.714783 MiB 00:04:31.207 element at address: 0x10e2700000 with size: 0.125732 MiB 00:04:31.207 list of standard malloc elements. 
size: 197.217834 MiB 00:04:31.207 element at address: 0x10e7bfff80 with size: 132.000122 MiB 00:04:31.207 element at address: 0x11058b5f80 with size: 64.000122 MiB 00:04:31.207 element at address: 0x10e25fff80 with size: 1.000122 MiB 00:04:31.207 element at address: 0x110ffd9f00 with size: 0.140747 MiB 00:04:31.207 element at address: 0x10e276fc80 with size: 0.062622 MiB 00:04:31.207 element at address: 0x110fffdf80 with size: 0.007935 MiB 00:04:31.207 element at address: 0x11098b6480 with size: 0.000305 MiB 00:04:31.207 element at address: 0x10e2720300 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e27203c0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e2720480 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e2720540 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e2720600 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e2727200 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e2727400 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e27274c0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e272f780 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e272f840 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e272f900 with size: 0.000183 MiB 00:04:31.207 element at address: 0x10e276fbc0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6000 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b60c0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6180 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6240 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6300 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b63c0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b65c0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6680 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6880 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098b6940 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098d6c00 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11098d6cc0 with size: 0.000183 MiB 00:04:31.207 element at address: 0x11099d6f80 with size: 0.000183 MiB 00:04:31.208 element at address: 0x1109ad7240 with size: 0.000183 MiB 00:04:31.208 element at address: 0x1109ad7300 with size: 0.000183 MiB 00:04:31.208 element at address: 0x110ccd7640 with size: 0.000183 MiB 00:04:31.208 element at address: 0x110ccd7840 with size: 0.000183 MiB 00:04:31.208 element at address: 0x110ccd7900 with size: 0.000183 MiB 00:04:31.208 element at address: 0x110fed7c40 with size: 0.000183 MiB 00:04:31.208 element at address: 0x110ffd9e40 with size: 0.000183 MiB 00:04:31.208 list of memzone associated elements. 
size: 596.710144 MiB 00:04:31.208 element at address: 0x10c2cfcac0 with size: 211.013000 MiB 00:04:31.208 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.208 element at address: 0x10a678cec0 with size: 152.449524 MiB 00:04:31.208 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:31.208 element at address: 0x10e277fd00 with size: 84.000122 MiB 00:04:31.208 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46414_0 00:04:31.208 element at address: 0x110ccd79c0 with size: 48.000122 MiB 00:04:31.208 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46414_0 00:04:31.208 element at address: 0x1109ad73c0 with size: 48.000122 MiB 00:04:31.208 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46414_0 00:04:31.208 element at address: 0x10e0f3d780 with size: 20.250671 MiB 00:04:31.208 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:31.208 element at address: 0x10c1afc800 with size: 18.000671 MiB 00:04:31.208 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.208 element at address: 0x110fcd7a40 with size: 2.000488 MiB 00:04:31.208 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46414 00:04:31.208 element at address: 0x110cad7440 with size: 2.000488 MiB 00:04:31.208 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46414 00:04:31.208 element at address: 0x110fed7d00 with size: 1.008118 MiB 00:04:31.208 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46414 00:04:31.208 element at address: 0x10e23fdc40 with size: 1.008118 MiB 00:04:31.208 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.208 element at address: 0x10e0e3b640 with size: 1.008118 MiB 00:04:31.208 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.208 element at address: 0x10e0d39500 with size: 1.008118 MiB 00:04:31.208 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.208 element at address: 0x10e0c373c0 with size: 1.008118 MiB 00:04:31.208 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.208 element at address: 0x11099d7040 with size: 1.000488 MiB 00:04:31.208 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46414 00:04:31.208 element at address: 0x11098d6d80 with size: 1.000488 MiB 00:04:31.208 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46414 00:04:31.208 element at address: 0x10e24ffd80 with size: 1.000488 MiB 00:04:31.208 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46414 00:04:31.208 element at address: 0x10e0ab6fc0 with size: 1.000488 MiB 00:04:31.208 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46414 00:04:31.208 element at address: 0x10e7b7fd80 with size: 0.500488 MiB 00:04:31.208 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46414 00:04:31.208 element at address: 0x10e237da40 with size: 0.500488 MiB 00:04:31.208 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.208 element at address: 0x10e0bb71c0 with size: 0.500488 MiB 00:04:31.208 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.208 element at address: 0x10e272f9c0 with size: 0.250488 MiB 00:04:31.208 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.208 element at address: 0x11098b6a00 with size: 0.125488 MiB 00:04:31.208 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46414 00:04:31.208 
element at address: 0x10e2727580 with size: 0.031738 MiB 00:04:31.208 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.208 element at address: 0x10e27206c0 with size: 0.023743 MiB 00:04:31.208 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.208 element at address: 0x11058b1d80 with size: 0.016113 MiB 00:04:31.208 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46414 00:04:31.208 element at address: 0x10e2726800 with size: 0.002441 MiB 00:04:31.208 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.208 element at address: 0x110ccd7700 with size: 0.000305 MiB 00:04:31.208 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46414 00:04:31.208 element at address: 0x11098b6740 with size: 0.000305 MiB 00:04:31.208 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46414 00:04:31.208 element at address: 0x10e27272c0 with size: 0.000305 MiB 00:04:31.208 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.208 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.208 18:18:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46414 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46414 ']' 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46414 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46414 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:31.208 killing process with pid 46414 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46414' 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46414 00:04:31.208 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46414 00:04:31.466 00:04:31.466 real 0m1.713s 00:04:31.466 user 0m1.688s 00:04:31.466 sys 0m0.808s 00:04:31.466 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.466 ************************************ 00:04:31.466 END TEST dpdk_mem_utility 00:04:31.466 ************************************ 00:04:31.466 18:18:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.724 18:18:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.724 18:18:23 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:31.724 18:18:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.724 18:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.724 18:18:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.724 ************************************ 00:04:31.724 START TEST event 00:04:31.724 ************************************ 00:04:31.724 18:18:23 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:31.724 * Looking for test storage... 
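The heap/mempool/memzone report in the dpdk_mem_utility test above is produced in two passes over a memory dump that the target writes on request; the env_dpdk_get_mem_stats reply earlier in that test names the dump file. Commands as traced (the parser presumably reads that default path):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" env_dpdk_get_mem_stats                               # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py       # summary: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0  # the per-element listing for heap id 0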
00:04:31.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:31.724 18:18:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:31.724 18:18:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:31.724 18:18:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.724 18:18:24 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:31.724 18:18:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.724 18:18:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.724 ************************************ 00:04:31.724 START TEST event_perf 00:04:31.724 ************************************ 00:04:31.724 18:18:24 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.724 Running I/O for 1 seconds...[2024-07-15 18:18:24.047470] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:31.724 [2024-07-15 18:18:24.047751] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:32.661 EAL: TSC is not safe to use in SMP mode 00:04:32.661 EAL: TSC is not invariant 00:04:32.661 [2024-07-15 18:18:24.669959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.661 [2024-07-15 18:18:24.792092] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:32.661 [2024-07-15 18:18:24.792161] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:32.661 [2024-07-15 18:18:24.792175] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:32.661 [2024-07-15 18:18:24.792185] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:32.661 Running I/O for 1 seconds...[2024-07-15 18:18:24.797294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.662 [2024-07-15 18:18:24.797093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.662 [2024-07-15 18:18:24.797145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.662 [2024-07-15 18:18:24.797287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.599 00:04:33.599 lcore 0: 2619325 00:04:33.599 lcore 1: 2619324 00:04:33.599 lcore 2: 2619324 00:04:33.599 lcore 3: 2619325 00:04:33.599 done. 
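The lcore counters printed above are event_perf's totals: with -m 0xF and -t 1, each of the four reactors handled roughly 2.6 million events in one second. The heart of what it measures is the event requeue pattern sketched below. The spdk_event_allocate()/spdk_event_call() and spdk_env_get_*_core() calls are the real SPDK APIs; the counter array, its sizing, and the stop handling are simplifications of event_perf's actual bookkeeping.

```c
/* Event ping-pong of the kind measured above: each event bumps the
 * current core's counter, then requeues itself on the next core. */
#include "spdk/event.h"
#include "spdk/env.h"

#define SKETCH_MAX_CORES 64            /* illustrative sizing only */
static uint64_t g_count[SKETCH_MAX_CORES];

static void
event_fn(void *arg1, void *arg2)
{
	uint32_t core = spdk_env_get_current_core();
	uint32_t next = spdk_env_get_next_core(core);

	if (next == UINT32_MAX) {
		next = spdk_env_get_first_core();  /* wrap around the mask */
	}
	g_count[core]++;
	/* allocate the follow-up event on the next reactor and fire it */
	spdk_event_call(spdk_event_allocate(next, event_fn, arg1, arg2));
}
```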
00:04:33.599 00:04:33.599 real 0m1.910s 00:04:33.599 user 0m4.246s 00:04:33.599 sys 0m0.656s 00:04:33.599 18:18:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.599 ************************************ 00:04:33.599 END TEST event_perf 00:04:33.599 ************************************ 00:04:33.599 18:18:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 18:18:25 event -- common/autotest_common.sh@1142 -- # return 0 00:04:33.858 18:18:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.858 18:18:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:33.858 18:18:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.858 18:18:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 ************************************ 00:04:33.858 START TEST event_reactor 00:04:33.858 ************************************ 00:04:33.858 18:18:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.858 [2024-07-15 18:18:26.002089] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:33.858 [2024-07-15 18:18:26.002306] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:34.426 EAL: TSC is not safe to use in SMP mode 00:04:34.426 EAL: TSC is not invariant 00:04:34.426 [2024-07-15 18:18:26.627324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.426 [2024-07-15 18:18:26.746841] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:34.426 [2024-07-15 18:18:26.749630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.803 test_start 00:04:35.803 oneshot 00:04:35.803 tick 100 00:04:35.803 tick 100 00:04:35.803 tick 250 00:04:35.803 tick 100 00:04:35.803 tick 100 00:04:35.803 tick 100 00:04:35.803 tick 250 00:04:35.803 tick 500 00:04:35.803 tick 100 00:04:35.803 tick 100 00:04:35.803 tick 250 00:04:35.803 tick 100 00:04:35.803 tick 100 00:04:35.803 test_end 00:04:35.803 00:04:35.803 real 0m1.882s 00:04:35.803 user 0m1.212s 00:04:35.803 sys 0m0.668s 00:04:35.803 18:18:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.803 18:18:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:35.803 ************************************ 00:04:35.803 END TEST event_reactor 00:04:35.803 ************************************ 00:04:35.803 18:18:27 event -- common/autotest_common.sh@1142 -- # return 0 00:04:35.803 18:18:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.803 18:18:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:35.803 18:18:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.803 18:18:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.803 ************************************ 00:04:35.803 START TEST event_reactor_perf 00:04:35.803 ************************************ 00:04:35.803 18:18:27 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.803 [2024-07-15 18:18:27.933946] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
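The test_start/oneshot/tick trace from event_reactor earlier in this run suggests a one-shot poller plus periodic pollers firing at mixed periods (the tick labels appear to track 100/250/500 microsecond periods). The sketch below shows the registration API involved, not the test's exact wiring: spdk_poller_register() and spdk_poller_unregister() are the real calls, the callbacks are illustrative.

```c
/* Timed pollers like the ones behind the tick trace above. A period of 0
 * means "run every reactor iteration"; non-zero is in microseconds. */
#include "spdk/stdinc.h"
#include "spdk/thread.h"

static struct spdk_poller *g_tick100, *g_oneshot;

static int
tick_100(void *ctx)
{
	printf("tick 100\n");
	return SPDK_POLLER_BUSY;
}

static int
oneshot(void *ctx)
{
	printf("oneshot\n");
	spdk_poller_unregister(&g_oneshot);   /* run once, then remove */
	return SPDK_POLLER_BUSY;
}

static void
setup(void)
{
	g_oneshot = spdk_poller_register(oneshot, NULL, 0);
	g_tick100 = spdk_poller_register(tick_100, NULL, 100);  /* 100 us */
}
```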
00:04:35.803 [2024-07-15 18:18:27.934216] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:36.369 EAL: TSC is not safe to use in SMP mode 00:04:36.369 EAL: TSC is not invariant 00:04:36.369 [2024-07-15 18:18:28.541974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.369 [2024-07-15 18:18:28.651192] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:36.369 [2024-07-15 18:18:28.653520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.745 test_start 00:04:37.745 test_end 00:04:37.745 Performance: 3320627 events per second 00:04:37.745 00:04:37.745 real 0m1.846s 00:04:37.745 user 0m1.181s 00:04:37.745 sys 0m0.664s 00:04:37.745 18:18:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.745 ************************************ 00:04:37.745 END TEST event_reactor_perf 00:04:37.745 ************************************ 00:04:37.745 18:18:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.745 18:18:29 event -- common/autotest_common.sh@1142 -- # return 0 00:04:37.745 18:18:29 event -- event/event.sh@49 -- # uname -s 00:04:37.745 18:18:29 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:37.745 00:04:37.745 real 0m5.959s 00:04:37.745 user 0m6.805s 00:04:37.745 sys 0m2.195s 00:04:37.745 18:18:29 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.745 18:18:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.745 ************************************ 00:04:37.745 END TEST event 00:04:37.745 ************************************ 00:04:37.745 18:18:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.745 18:18:29 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:37.745 18:18:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.745 18:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.745 18:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.745 ************************************ 00:04:37.745 START TEST thread 00:04:37.745 ************************************ 00:04:37.745 18:18:29 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:37.745 * Looking for test storage... 00:04:37.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:37.746 18:18:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.746 18:18:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:37.746 18:18:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.746 18:18:30 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.746 ************************************ 00:04:37.746 START TEST thread_poller_perf 00:04:37.746 ************************************ 00:04:37.746 18:18:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.746 [2024-07-15 18:18:30.018117] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
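reactor_perf's "Performance: 3320627 events per second" above is a single-reactor event throughput figure. The rate math is plain tick counting, sketched here with the real spdk_get_ticks()/spdk_get_ticks_hz() helpers; the variable names and the counting loop are assumptions, not reactor_perf's exact code.

```c
/* Events-per-second computation of the kind printed above: count
 * completions, then scale by the TSC frequency. */
#include "spdk/stdinc.h"
#include "spdk/env.h"

static uint64_t g_events;   /* incremented once per processed event */

static uint64_t
events_per_second(uint64_t start_ticks)
{
	uint64_t elapsed = spdk_get_ticks() - start_ticks;

	/* multiply before dividing to keep integer precision */
	return g_events * spdk_get_ticks_hz() / elapsed;
}
```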
00:04:37.746 [2024-07-15 18:18:30.018285] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:38.312 EAL: TSC is not safe to use in SMP mode 00:04:38.312 EAL: TSC is not invariant 00:04:38.312 [2024-07-15 18:18:30.625851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.571 [2024-07-15 18:18:30.750762] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:38.571 [2024-07-15 18:18:30.753398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.571 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:39.947 ====================================== 00:04:39.947 busy:2201727373 (cyc) 00:04:39.947 total_run_count: 5351000 00:04:39.947 tsc_hz: 2200002400 (cyc) 00:04:39.947 ====================================== 00:04:39.947 poller_cost: 411 (cyc), 186 (nsec) 00:04:39.947 00:04:39.947 real 0m1.893s 00:04:39.947 user 0m1.252s 00:04:39.947 sys 0m0.642s 00:04:39.947 18:18:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.947 ************************************ 00:04:39.947 END TEST thread_poller_perf 00:04:39.947 ************************************ 00:04:39.947 18:18:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.947 18:18:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:39.948 18:18:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:39.948 18:18:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:39.948 18:18:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.948 18:18:31 thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.948 ************************************ 00:04:39.948 START TEST thread_poller_perf 00:04:39.948 ************************************ 00:04:39.948 18:18:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:39.948 [2024-07-15 18:18:31.961561] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:39.948 [2024-07-15 18:18:31.961805] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:40.207 EAL: TSC is not safe to use in SMP mode 00:04:40.207 EAL: TSC is not invariant 00:04:40.466 [2024-07-15 18:18:32.569234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.466 [2024-07-15 18:18:32.698565] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:40.466 [2024-07-15 18:18:32.702305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.466 Running 1000 pollers for 1 seconds with 0 microseconds period. 
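The poller_cost line above is derived from the two numbers printed with it: 2201727373 busy cycles over 5351000 runs is 411 cycles per poller invocation, and 411 cycles at a tsc_hz of 2200002400 is 186 ns. The 0-microsecond run announced just above yields 32 cyc / 14 nsec by the same arithmetic. The integer math, reconstructed below, is inferred from the printed values rather than copied from poller_perf's source.

```c
/* Reproduces "poller_cost: 411 (cyc), 186 (nsec)" from the run above. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int
main(void)
{
	uint64_t busy = 2201727373ULL;    /* busy TSC cycles for the run */
	uint64_t runs = 5351000ULL;       /* total poller executions */
	uint64_t tsc_hz = 2200002400ULL;  /* TSC ticks per second */

	uint64_t cyc = busy / runs;                    /* 411 */
	uint64_t nsec = cyc * 1000000000ULL / tsc_hz;  /* 186 */

	printf("poller_cost: %" PRIu64 " (cyc), %" PRIu64 " (nsec)\n",
	       cyc, nsec);
	return 0;
}
```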
00:04:41.899 ====================================== 00:04:41.899 busy:2202096119 (cyc) 00:04:41.899 total_run_count: 68767000 00:04:41.899 tsc_hz: 2200002400 (cyc) 00:04:41.899 ====================================== 00:04:41.899 poller_cost: 32 (cyc), 14 (nsec) 00:04:41.899 00:04:41.899 real 0m1.894s 00:04:41.899 user 0m1.247s 00:04:41.899 sys 0m0.644s 00:04:41.899 18:18:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.899 18:18:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.899 ************************************ 00:04:41.899 END TEST thread_poller_perf 00:04:41.899 ************************************ 00:04:41.899 18:18:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:41.899 18:18:33 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:41.899 18:18:33 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:41.899 18:18:33 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.899 18:18:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.899 18:18:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.899 ************************************ 00:04:41.899 START TEST thread_spdk_lock 00:04:41.899 ************************************ 00:04:41.899 18:18:33 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:41.899 [2024-07-15 18:18:33.898204] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:41.899 [2024-07-15 18:18:33.898484] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.158 EAL: TSC is not safe to use in SMP mode 00:04:42.158 EAL: TSC is not invariant 00:04:42.158 [2024-07-15 18:18:34.474399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.416 [2024-07-15 18:18:34.580905] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:42.416 [2024-07-15 18:18:34.580960] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
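The spdk_lock test starting above (two reactors, -c 0x3) deliberately violates SPDK's instrumented spinlock rules to confirm each violation is caught: the "unrecoverable spinlock error" lines that follow are the expected assertions, and the contend, hold_by_poller, and hold_by_message cases all PASS with 100014 assertions. Below is a sketch of the spdk_spin_* API being exercised; the API names are real, the poller around them is illustrative.

```c
/* Correct use of SPDK's checked spinlocks, with comments on the two
 * violations the test injects on purpose. */
#include "spdk/stdinc.h"
#include "spdk/thread.h"

static struct spdk_spinlock g_lock;
static uint64_t g_shared;

static void
init_lock(void)
{
	spdk_spin_init(&g_lock);
}

static int
worker_poller(void *ctx)
{
	spdk_spin_lock(&g_lock);
	g_shared++;
	/* Must unlock before returning: holding a spinlock while the SPDK
	 * thread goes off CPU is "unrecoverable spinlock error 7" below. */
	spdk_spin_unlock(&g_lock);

	/* Re-taking g_lock here without first unlocking would trip
	 * "unrecoverable spinlock error 2: Deadlock detected". */
	return SPDK_POLLER_BUSY;
}
```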
00:04:42.416 [2024-07-15 18:18:34.583776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.416 [2024-07-15 18:18:34.583771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.674 [2024-07-15 18:18:35.021818] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:42.674 [2024-07-15 18:18:35.021877] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:42.674 [2024-07-15 18:18:35.021887] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x315be0 00:04:42.674 [2024-07-15 18:18:35.022455] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:42.674 [2024-07-15 18:18:35.022555] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:42.674 [2024-07-15 18:18:35.022565] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:42.932 Starting test contend 00:04:42.932 Worker Delay Wait us Hold us Total us 00:04:42.932 0 3 261037 162438 423476 00:04:42.932 1 5 163611 263422 427034 00:04:42.932 PASS test contend 00:04:42.932 Starting test hold_by_poller 00:04:42.932 PASS test hold_by_poller 00:04:42.932 Starting test hold_by_message 00:04:42.932 PASS test hold_by_message 00:04:42.932 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:42.932 100014 assertions passed 00:04:42.932 0 assertions failed 00:04:42.932 00:04:42.932 real 0m1.278s 00:04:42.932 user 0m1.108s 00:04:42.932 sys 0m0.606s 00:04:42.932 18:18:35 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.932 18:18:35 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 ************************************ 00:04:42.932 END TEST thread_spdk_lock 00:04:42.932 ************************************ 00:04:42.932 18:18:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:42.932 00:04:42.932 real 0m5.343s 00:04:42.932 user 0m3.764s 00:04:42.932 sys 0m2.049s 00:04:42.932 18:18:35 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.932 18:18:35 thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 ************************************ 00:04:42.932 END TEST thread 00:04:42.932 ************************************ 00:04:42.932 18:18:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.932 18:18:35 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:42.932 18:18:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.932 18:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.932 18:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.932 ************************************ 00:04:42.932 START TEST accel 00:04:42.932 ************************************ 00:04:42.932 18:18:35 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:43.191 * Looking for test storage... 
00:04:43.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:43.191 18:18:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:43.191 18:18:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:43.191 18:18:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.191 18:18:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46718 00:04:43.191 18:18:35 accel -- accel/accel.sh@63 -- # waitforlisten 46718 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@829 -- # '[' -z 46718 ']' 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.191 18:18:35 accel -- common/autotest_common.sh@10 -- # set +x 00:04:43.191 18:18:35 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.NdDW6B 00:04:43.191 [2024-07-15 18:18:35.383807] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:43.191 [2024-07-15 18:18:35.384001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:43.758 EAL: TSC is not safe to use in SMP mode 00:04:43.758 EAL: TSC is not invariant 00:04:43.758 [2024-07-15 18:18:36.006790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.758 [2024-07-15 18:18:36.117138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:44.016 18:18:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:44.016 18:18:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:44.016 18:18:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:44.016 18:18:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:44.016 18:18:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:44.016 18:18:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:44.016 18:18:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:44.016 18:18:36 accel -- accel/accel.sh@41 -- # jq -r . 00:04:44.016 [2024-07-15 18:18:36.127643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@862 -- # return 0 00:04:44.275 18:18:36 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:44.275 18:18:36 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:44.275 18:18:36 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:44.275 18:18:36 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:44.275 18:18:36 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:44.275 18:18:36 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:44.275 18:18:36 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@10 -- # set +x 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 
18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # IFS== 00:04:44.275 18:18:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:44.275 18:18:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:44.275 18:18:36 accel -- accel/accel.sh@75 -- # killprocess 46718 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@948 -- # '[' -z 46718 ']' 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@952 -- # kill -0 46718 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@953 -- # uname 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46718 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@956 -- # tail -1 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:44.275 killing process with pid 46718 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46718' 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@967 -- # kill 46718 00:04:44.275 18:18:36 accel -- common/autotest_common.sh@972 -- # wait 46718 00:04:44.533 18:18:36 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:44.533 18:18:36 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:44.533 18:18:36 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:44.533 18:18:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.533 18:18:36 accel -- common/autotest_common.sh@10 -- # set +x 00:04:44.533 18:18:36 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:44.533 18:18:36 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FPfEHb -h 00:04:44.533 18:18:36 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.533 18:18:36 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:44.533 18:18:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:44.533 18:18:36 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:44.533 18:18:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:44.534 18:18:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.534 18:18:36 accel -- common/autotest_common.sh@10 -- # 
set +x 00:04:44.534 ************************************ 00:04:44.534 START TEST accel_missing_filename 00:04:44.534 ************************************ 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.534 18:18:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:44.534 18:18:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2cAPCB -t 1 -w compress 00:04:44.534 [2024-07-15 18:18:36.817301] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:44.534 [2024-07-15 18:18:36.817563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:45.099 EAL: TSC is not safe to use in SMP mode 00:04:45.099 EAL: TSC is not invariant 00:04:45.099 [2024-07-15 18:18:37.433030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.357 [2024-07-15 18:18:37.538237] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:45.357 18:18:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:45.357 [2024-07-15 18:18:37.549315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.357 [2024-07-15 18:18:37.551834] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:45.357 [2024-07-15 18:18:37.591742] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:45.615 A filename is required. 
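"A filename is required." above is accel_perf rejecting -w compress without an -l input file, and the harness's NOT() wrapper treats the resulting non-zero exit as the expected outcome. The exit path is the standard spdk_app_start/spdk_app_stop pattern; the "spdk_app_stop'd on non-zero" warning above is exactly this. A sketch, with accel_perf's option handling and workload setup trimmed away:

```c
/* How accel_perf-style apps turn a setup error into exit status 1: the
 * start callback calls spdk_app_stop(-1), making spdk_app_start()
 * return non-zero. Sketch only; messages mirror the log above. */
#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
start_cb(void *arg)
{
	/* workload setup would run here; on bad config: */
	spdk_app_stop(-1);   /* logged as "spdk_app_stop'd on non-zero" */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "accel_perf";

	rc = spdk_app_start(&opts, start_cb, NULL);
	if (rc) {
		fprintf(stderr, "ERROR starting application\n");
	}
	spdk_app_fini();
	return rc ? 1 : 0;
}
```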
00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.615 00:04:45.615 real 0m0.933s 00:04:45.615 user 0m0.269s 00:04:45.615 sys 0m0.667s 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.615 18:18:37 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:45.615 ************************************ 00:04:45.615 END TEST accel_missing_filename 00:04:45.615 ************************************ 00:04:45.616 18:18:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:45.616 18:18:37 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:45.616 18:18:37 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:45.616 18:18:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.616 18:18:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:45.616 ************************************ 00:04:45.616 START TEST accel_compress_verify 00:04:45.616 ************************************ 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.616 18:18:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:45.616 18:18:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.SdAs0L -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:45.616 [2024-07-15 18:18:37.796508] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
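accel_compress_verify, starting above, checks the inverse guard: -w compress with a valid -l input but with -y must abort, since compressed output cannot be byte-compared against the source. The expected rejection, reconstructed for illustration only; accel_perf's real variable names differ.

```c
/* Illustrative guard producing "Compression does not support the verify
 * option, aborting." in the output below; names are hypothetical. */
if (workload_is_compress && verify_requested) {
	fprintf(stderr,
		"Compression does not support the verify option, aborting.\n");
	return -EINVAL;
}
```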
00:04:45.616 [2024-07-15 18:18:37.796703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:46.181 EAL: TSC is not safe to use in SMP mode 00:04:46.181 EAL: TSC is not invariant 00:04:46.181 [2024-07-15 18:18:38.408305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.181 [2024-07-15 18:18:38.516922] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:46.181 18:18:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:46.181 [2024-07-15 18:18:38.525115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.181 [2024-07-15 18:18:38.527629] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:46.440 [2024-07-15 18:18:38.567578] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:46.440 00:04:46.440 Compression does not support the verify option, aborting. 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.440 00:04:46.440 real 0m0.931s 00:04:46.440 user 0m0.265s 00:04:46.440 sys 0m0.664s 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.440 18:18:38 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:46.440 ************************************ 00:04:46.440 END TEST accel_compress_verify 00:04:46.440 ************************************ 00:04:46.440 18:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:46.440 18:18:38 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:46.440 18:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:46.440 18:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.440 18:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:46.440 ************************************ 00:04:46.440 START TEST accel_wrong_workload 00:04:46.440 ************************************ 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:46.440 18:18:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.XeJiec -t 1 -w foobar 00:04:46.440 Unsupported workload type: foobar 00:04:46.440 [2024-07-15 18:18:38.772405] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:46.440 accel_perf options: 00:04:46.440 [-h help message] 00:04:46.440 [-q queue depth per core] 00:04:46.440 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:46.440 [-T number of threads per core 00:04:46.440 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:46.440 [-t time in seconds] 00:04:46.440 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:46.440 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:46.440 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:46.440 [-l for compress/decompress workloads, name of uncompressed input file 00:04:46.440 [-S for crc32c workload, use this seed value (default 0) 00:04:46.440 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:46.440 [-f for fill workload, use this BYTE value (default 255) 00:04:46.440 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:46.440 [-y verify result if this switch is on] 00:04:46.440 [-a tasks to allocate per core (default: same value as -q)] 00:04:46.440 Can be used to spread operations across a wider range of memory. 
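The usage dump above is accel_perf's reaction to an app-specific option failing to parse: app.c:1451 is inside spdk_app_parse_args(), which runs the application's own option callback and prints usage when that callback returns non-zero. A sketch of the hook follows; spdk_app_parse_args() and the callback shape are the real API, while the specific checks shown are illustrative.

```c
/* App-specific option validation of the kind that produced
 * "Parsing app-specific command line parameter 'w' failed: 1" above. */
#include "spdk/stdinc.h"
#include "spdk/event.h"

static int
parse_arg(int ch, char *arg)
{
	switch (ch) {
	case 'w':
		if (strcmp(arg, "foobar") == 0) {
			fprintf(stderr, "Unsupported workload type: %s\n", arg);
			return 1;   /* non-zero -> parse fails, usage printed */
		}
		return 0;
	case 'x':
		if (atoi(arg) < 0) {
			fprintf(stderr, "-x option must be non-negative.\n");
			return 1;
		}
		return 0;
	default:
		return -EINVAL;
	}
}

static void
usage(void)
{
	printf(" [-w workload type]\n [-x number of source buffers]\n");
}

/* wired up via:
 * spdk_app_parse_args(argc, argv, &opts, "w:x:", NULL, parse_arg, usage);
 */
```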
00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.440 00:04:46.440 real 0m0.010s 00:04:46.440 user 0m0.007s 00:04:46.440 sys 0m0.003s 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.440 18:18:38 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:46.440 ************************************ 00:04:46.440 END TEST accel_wrong_workload 00:04:46.440 ************************************ 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:46.699 18:18:38 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 ************************************ 00:04:46.699 START TEST accel_negative_buffers 00:04:46.699 ************************************ 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:46.699 18:18:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mAu65z -t 1 -w xor -y -x -1 00:04:46.699 -x option must be non-negative. 00:04:46.699 accel_perf options: 00:04:46.699 [-h help message] 00:04:46.699 [-q queue depth per core] 00:04:46.699 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:46.699 [-T number of threads per core 00:04:46.699 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:46.699 [-t time in seconds] 00:04:46.699 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:46.699 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:46.699 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:46.699 [-l for compress/decompress workloads, name of uncompressed input file 00:04:46.699 [-S for crc32c workload, use this seed value (default 0) 00:04:46.699 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:46.699 [-f for fill workload, use this BYTE value (default 255) 00:04:46.699 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:46.699 [-y verify result if this switch is on] 00:04:46.699 [-a tasks to allocate per core (default: same value as -q)] 00:04:46.699 Can be used to spread operations across a wider range of memory. 00:04:46.699 [2024-07-15 18:18:38.829044] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.699 00:04:46.699 real 0m0.010s 00:04:46.699 user 0m0.001s 00:04:46.699 sys 0m0.010s 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.699 18:18:38 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 ************************************ 00:04:46.699 END TEST accel_negative_buffers 00:04:46.699 ************************************ 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:46.699 18:18:38 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.699 18:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 ************************************ 00:04:46.699 START TEST accel_crc32c 00:04:46.699 ************************************ 00:04:46.699 18:18:38 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:46.699 18:18:38 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nl2zbh -t 1 -w crc32c -S 32 -y 00:04:46.699 [2024-07-15 18:18:38.886423] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
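accel_crc32c, starting above, drives crc32c operations through the accel framework with seed 32 (-S 32), 4 KiB buffers, and result verification (-y); the val=software lines that follow show every opcode assigned to the software module on this VM (no hardware accel engine present). A sketch of the submission path: spdk_accel_submit_crc32c() and spdk_accel_get_io_channel() are the real accel APIs, while the surrounding setup (an initialized accel framework on a running SPDK thread) is assumed.

```c
/* One crc32c submission like accel_perf's -w crc32c -S 32 run above. */
#include "spdk/stdinc.h"
#include "spdk/accel.h"
#include "spdk/thread.h"

static uint32_t g_crc;
static uint8_t g_src[4096];      /* matches the '4096 bytes' size below */

static void
crc_done(void *cb_arg, int status)
{
	printf("crc32c=0x%08x status=%d\n", g_crc, status);
}

static void
submit_one(void)
{
	struct spdk_io_channel *ch = spdk_accel_get_io_channel();

	/* seed 32 mirrors the test's -S 32 option */
	spdk_accel_submit_crc32c(ch, &g_crc, g_src, 32, sizeof(g_src),
				 crc_done, NULL);
}
```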
00:04:46.699 [2024-07-15 18:18:38.886677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:47.266 EAL: TSC is not safe to use in SMP mode 00:04:47.266 EAL: TSC is not invariant 00:04:47.266 [2024-07-15 18:18:39.499481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.266 [2024-07-15 18:18:39.608246] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:47.266 [2024-07-15 18:18:39.619746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:47.266 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.548 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.548 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:47.548 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.548 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.549 18:18:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.522 
18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.522 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:48.523 18:18:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:48.523 00:04:48.523 real 0m1.948s 00:04:48.523 user 0m1.295s 00:04:48.523 sys 0m0.657s 00:04:48.523 18:18:40 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.523 18:18:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:48.523 ************************************ 00:04:48.523 END TEST accel_crc32c 00:04:48.523 ************************************ 00:04:48.523 18:18:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:48.523 18:18:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:48.523 18:18:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:48.523 18:18:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.523 18:18:40 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.523 ************************************ 00:04:48.523 START TEST accel_crc32c_C2 00:04:48.523 ************************************ 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.c8rlIp -t 1 -w crc32c -y -C 2 00:04:48.523 [2024-07-15 18:18:40.867154] 
00:04:48.523 18:18:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:04:48.523 ************************************
00:04:48.523 START TEST accel_crc32c_C2
00:04:48.523 ************************************
00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:04:48.523 18:18:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.c8rlIp -t 1 -w crc32c -y -C 2
00:04:48.523 [2024-07-15 18:18:40.867154] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:48.523 [2024-07-15 18:18:40.867461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:49.458 EAL: TSC is not safe to use in SMP mode
00:04:49.458 EAL: TSC is not invariant
00:04:49.458 [2024-07-15 18:18:41.465185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.458 [2024-07-15 18:18:41.569855] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:49.458 18:18:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:04:49.458 [2024-07-15 18:18:41.576876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.458 18:18:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:04:49.458 18:18:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:04:49.458 [... config values traced: 0x1, crc32c, 0, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:04:50.833 18:18:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:50.833 18:18:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:04:50.833 18:18:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:50.833 real 0m1.913s
00:04:50.833 user 0m1.291s
00:04:50.833 sys 0m0.628s
00:04:50.833 ************************************
00:04:50.833 END TEST accel_crc32c_C2
00:04:50.833 ************************************
00:04:50.833 18:18:42 accel -- common/autotest_common.sh@1142 -- # return 0
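The case "$var" in / IFS=: / read -r var val records that dominate these traces come from accel.sh reading accel_perf's self-reported configuration one "key: value" line at a time. A simplified reconstruction of that loop follows; the key patterns and value handling are assumptions for illustration, as only the IFS=:/read/case skeleton and the resulting accel_opc/accel_module assignments are visible in the trace:

    # Hypothetical sketch of the parsing loop the xtrace shows.
    parse_accel_output() {
        local accel_opc accel_module var val
        while IFS=: read -r var val; do       # split each line on the first ':'
            case "$var" in                    # dispatch on the key
                *opcode*) accel_opc=${val// /} ;;    # assumed key pattern
                *module*) accel_module=${val// /} ;; # assumed key pattern
            esac
        done
        # the post-run assertions seen above:
        [[ -n $accel_module ]] && [[ -n $accel_opc ]]
    }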
00:04:50.833 18:18:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:04:50.833 ************************************
00:04:50.833 START TEST accel_copy
00:04:50.833 ************************************
00:04:50.833 18:18:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:04:50.833 18:18:42 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yz8HDa -t 1 -w copy -y
00:04:50.833 [2024-07-15 18:18:42.819125] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:50.833 [2024-07-15 18:18:42.819303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:51.091 EAL: TSC is not safe to use in SMP mode
00:04:51.091 EAL: TSC is not invariant
00:04:51.091 [2024-07-15 18:18:43.418075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.348 [2024-07-15 18:18:43.539037] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:51.348 18:18:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:04:51.348 [2024-07-15 18:18:43.551099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.349 18:18:43 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:04:51.349 18:18:43 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:04:51.349 [... config values traced: 0x1, copy, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:04:52.720 18:18:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:52.720 18:18:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:04:52.720 18:18:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:52.720 real 0m1.948s
00:04:52.720 user 0m1.305s
00:04:52.720 sys 0m0.649s
00:04:52.720 ************************************
00:04:52.720 END TEST accel_copy
00:04:52.720 ************************************
00:04:52.720 18:18:44 accel -- common/autotest_common.sh@1142 -- # return 0
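Each block in this section follows the same shape, and the runs can be reproduced by hand with the same binary and flags the harness uses above. Everything here is taken from this log; the flag readings are inferred from the traced values, not from accel_perf documentation:

    # Re-running this section's workloads manually. Inferred from the trace:
    # -t 1 shows up as '1 seconds', -y as verification (val=Yes),
    # -f 128 as fill value 0x80, and -q 64 -a 64 as the 64/64 values.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w crc32c -y
    "$PERF" -t 1 -w crc32c -y -C 2
    "$PERF" -t 1 -w copy -y
    "$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y
    "$PERF" -t 1 -w copy_crc32c -y
    "$PERF" -t 1 -w dualcast -y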
00:04:52.720 18:18:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:52.720 ************************************
00:04:52.720 START TEST accel_fill
00:04:52.720 ************************************
00:04:52.721 18:18:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:52.721 18:18:44 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.4cGTq4 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:52.721 [2024-07-15 18:18:44.816043] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:52.721 [2024-07-15 18:18:44.816309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:53.287 EAL: TSC is not safe to use in SMP mode
00:04:53.287 EAL: TSC is not invariant
00:04:53.287 [2024-07-15 18:18:45.462588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.287 [2024-07-15 18:18:45.569057] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:53.287 18:18:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:04:53.287 [2024-07-15 18:18:45.580064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.287 18:18:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:04:53.288 18:18:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:04:53.288 [... config values traced: 0x1, fill, 0x80, '4096 bytes', software, 64, 64, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:04:54.660 18:18:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:54.660 18:18:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:04:54.660 18:18:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:54.660 real 0m1.941s
00:04:54.660 user 0m1.252s
00:04:54.660 sys 0m0.701s
00:04:54.660 ************************************
00:04:54.660 END TEST accel_fill
00:04:54.660 ************************************
00:04:54.660 18:18:46 accel -- common/autotest_common.sh@1142 -- # return 0
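Every test in this section is wrapped in the same run_test pattern that produces the START/END banners and the real/user/sys lines. A simplified sketch of that wrapper; the real one in autotest_common.sh also manages xtrace state and log capture:

    # Simplified run_test: banner, timed command, banner, preserved status.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # `time` emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y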
00:04:54.660 18:18:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:04:54.660 ************************************
00:04:54.660 START TEST accel_copy_crc32c
00:04:54.660 ************************************
00:04:54.660 18:18:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:04:54.660 18:18:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dDumjn -t 1 -w copy_crc32c -y
00:04:54.660 [2024-07-15 18:18:46.796699] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:54.660 [2024-07-15 18:18:46.796889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:55.238 EAL: TSC is not safe to use in SMP mode
00:04:55.238 EAL: TSC is not invariant
00:04:55.238 [2024-07-15 18:18:47.409893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.238 [2024-07-15 18:18:47.497650] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:55.238 18:18:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:04:55.238 [2024-07-15 18:18:47.507762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.238 18:18:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:04:55.239 18:18:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:04:55.239 [... config values traced: 0x1, copy_crc32c, 0, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:04:56.613 18:18:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:56.614 18:18:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:04:56.614 18:18:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:56.614 real 0m1.886s
00:04:56.614 user 0m1.214s
00:04:56.614 sys 0m0.677s
00:04:56.614 ************************************
00:04:56.614 END TEST accel_copy_crc32c
00:04:56.614 ************************************
00:04:56.614 18:18:48 accel -- common/autotest_common.sh@1142 -- # return 0
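The copy_crc32c opcode chains the two previous operations: it copies the source buffer and computes a CRC-32C over the same bytes in one operation (the trace shows a 4096-byte source, a 4096-byte destination, and seed 0). A conceptual bash equivalent, reusing the crc32c function sketched earlier; this file-based helper is hypothetical and illustrative only (command substitution drops trailing newlines and cannot carry NUL bytes):

    # Hypothetical helper: copy a file, then checksum what was copied.
    copy_crc32c_file() {
        local src=$1 dst=$2
        cp -- "$src" "$dst"            # the "copy" half
        crc32c "$(cat -- "$dst")"      # the "crc32c" half
    }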
00:04:56.614 18:18:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:04:56.614 ************************************
00:04:56.614 START TEST accel_copy_crc32c_C2
00:04:56.614 ************************************
00:04:56.614 18:18:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:04:56.614 18:18:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Nh2bCQ -t 1 -w copy_crc32c -y -C 2
00:04:56.614 [2024-07-15 18:18:48.722670] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:56.614 [2024-07-15 18:18:48.722885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:57.181 EAL: TSC is not safe to use in SMP mode
00:04:57.181 EAL: TSC is not invariant
00:04:57.181 [2024-07-15 18:18:49.313154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.181 [2024-07-15 18:18:49.400846] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:57.181 18:18:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:04:57.181 [2024-07-15 18:18:49.411633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.181 18:18:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:04:57.182 18:18:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:04:57.182 [... config values traced: 0x1, copy_crc32c, 0, '4096 bytes', '8192 bytes', software, 32, 32, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:04:58.557 18:18:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:58.557 18:18:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:04:58.557 18:18:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:58.557 real 0m1.869s
00:04:58.557 user 0m1.241s
00:04:58.557 sys 0m0.640s
00:04:58.557 ************************************
00:04:58.557 END TEST accel_copy_crc32c_C2
00:04:58.557 ************************************
00:04:58.557 18:18:50 accel -- common/autotest_common.sh@1142 -- # return 0
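The accel_dualcast run that follows writes a single source buffer to two destinations and, with -y, verifies both copies. A conceptual bash equivalent (a file-based sketch, not the SPDK data path):

    # One read, two writes: tee duplicates the source into dst1 and dst2.
    dualcast_file() {
        local src=$1 dst1=$2 dst2=$3
        tee -- "$dst1" > "$dst2" < "$src"
        cmp -s -- "$src" "$dst1" && cmp -s -- "$src" "$dst2"   # verify both
    }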
00:04:58.557 18:18:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:04:58.557 ************************************
00:04:58.557 START TEST accel_dualcast
00:04:58.557 ************************************
00:04:58.557 18:18:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:04:58.557 18:18:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dBQzWK -t 1 -w dualcast -y
00:04:58.557 [2024-07-15 18:18:50.636907] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:04:58.557 [2024-07-15 18:18:50.637089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:59.125 EAL: TSC is not safe to use in SMP mode
00:04:59.125 EAL: TSC is not invariant
00:04:59.125 [2024-07-15 18:18:51.274293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.125 [2024-07-15 18:18:51.406875] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:59.125 18:18:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:04:59.125 [2024-07-15 18:18:51.418914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.126 18:18:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:04:59.126 18:18:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:04:59.126 [... config values traced: 0x1, dualcast, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes; per-option xtrace elided ...]
00:05:00.501 18:18:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:00.501 18:18:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:00.501 18:18:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:00.501 real 0m1.993s
00:05:00.501 user 0m1.327s
00:05:00.501 sys 0m0.681s
00:05:00.501 ************************************
00:05:00.501 END TEST accel_dualcast
00:05:00.501 ************************************
00:05:00.501 18:18:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:00.501 18:18:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:00.501 ************************************
00:05:00.501 START TEST accel_compare
00:05:00.501 ************************************
00:05:00.501 18:18:52 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:00.501 18:18:52 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GnWj9A -t 1 -w compare -y
00:05:00.501 [2024-07-15 18:18:52.671493] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0
initialization... 00:05:00.501 [2024-07-15 18:18:52.671764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:01.078 EAL: TSC is not safe to use in SMP mode 00:05:01.078 EAL: TSC is not invariant 00:05:01.078 [2024-07-15 18:18:53.277269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.078 [2024-07-15 18:18:53.388512] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:01.078 [2024-07-15 18:18:53.399589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.078 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 
18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.079 18:18:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:02.453 18:18:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.453 00:05:02.453 real 0m1.938s 00:05:02.453 user 0m1.297s 00:05:02.453 sys 0m0.652s 00:05:02.453 18:18:54 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.453 18:18:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:02.453 ************************************ 00:05:02.453 END TEST accel_compare 00:05:02.453 ************************************ 00:05:02.453 18:18:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:02.453 18:18:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:02.453 18:18:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:02.453 18:18:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.453 18:18:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.453 ************************************ 00:05:02.453 START TEST accel_xor 00:05:02.453 ************************************ 00:05:02.453 18:18:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:02.453 18:18:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mP2zEO -t 1 -w xor -y 00:05:02.453 [2024-07-15 18:18:54.650774] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
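The two runs that complete above exercise the software accel module's dualcast and compare paths: accel_perf submits 4096-byte tasks against the software engine for one second (-t 1) and the harness prints real/user/sys on exit. As a rough sketch of what each submitted task amounts to — illustrative C with invented function names, not SPDK's actual accel_sw module code:

    #include <string.h>
    #include <stddef.h>

    /* dualcast: one source buffer written to two destinations. */
    static void
    sw_dualcast(void *dst1, void *dst2, const void *src, size_t nbytes)
    {
        memcpy(dst1, src, nbytes);
        memcpy(dst2, src, nbytes);
    }

    /* compare: returns 0 when the two source buffers match. */
    static int
    sw_compare(const void *src1, const void *src2, size_t nbytes)
    {
        return memcmp(src1, src2, nbytes);
    }
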
00:05:02.453 [2024-07-15 18:18:54.651059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:03.020 EAL: TSC is not safe to use in SMP mode 00:05:03.020 EAL: TSC is not invariant 00:05:03.020 [2024-07-15 18:18:55.243820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.020 [2024-07-15 18:18:55.362503] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:03.020 [2024-07-15 18:18:55.373954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.020 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.279 18:18:55 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.279 18:18:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.651 00:05:04.651 real 0m1.948s 00:05:04.651 user 0m1.311s 00:05:04.651 sys 0m0.647s 00:05:04.651 18:18:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.651 18:18:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:04.651 ************************************ 00:05:04.651 END TEST accel_xor 00:05:04.651 ************************************ 00:05:04.651 18:18:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.651 18:18:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:04.651 18:18:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:04.651 18:18:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.651 18:18:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.651 ************************************ 00:05:04.651 START TEST accel_xor 00:05:04.651 ************************************ 00:05:04.651 18:18:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:04.651 18:18:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PRh0SP -t 1 -w xor -y -x 3 00:05:04.651 [2024-07-15 18:18:56.643791] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
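The accel_xor run that finishes above XORs two 4096-byte source buffers into one destination; the run launched next with -x 3 raises the source count to three. A minimal hand-written sketch of that N-source semantic (a real engine would typically vectorize this rather than loop byte by byte):

    #include <stdint.h>
    #include <stddef.h>

    /* XOR nsrcs source buffers, byte by byte, into dst. */
    static void
    sw_xor(uint8_t *dst, const uint8_t **srcs, size_t nsrcs, size_t nbytes)
    {
        for (size_t i = 0; i < nbytes; i++) {
            uint8_t v = srcs[0][i];
            for (size_t s = 1; s < nsrcs; s++) {
                v ^= srcs[s][i];
            }
            dst[i] = v;
        }
    }
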
00:05:04.651 [2024-07-15 18:18:56.644126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:04.909 EAL: TSC is not safe to use in SMP mode 00:05:04.909 EAL: TSC is not invariant 00:05:04.909 [2024-07-15 18:18:57.261761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.167 [2024-07-15 18:18:57.374492] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:05.167 [2024-07-15 18:18:57.385217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.167 18:18:57 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.167 18:18:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:06.541 18:18:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.541 00:05:06.541 real 0m1.948s 00:05:06.541 user 0m1.290s 00:05:06.541 sys 0m0.668s 00:05:06.541 18:18:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.541 ************************************ 00:05:06.541 END TEST accel_xor 00:05:06.541 18:18:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:06.541 ************************************ 00:05:06.541 18:18:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.541 18:18:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:06.541 18:18:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:06.541 18:18:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.541 18:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.541 ************************************ 00:05:06.541 START TEST accel_dif_verify 00:05:06.541 ************************************ 00:05:06.541 18:18:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:06.541 18:18:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.aifwDA -t 1 -w dif_verify 00:05:06.541 [2024-07-15 18:18:58.631496] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
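For the -x 3 variant that just completed, only the source count changes; in terms of the sw_xor() sketch above, the per-task call would look like this (buffer names are hypothetical):

    /* Three 4096-byte sources, matching accel_test -t 1 -w xor -y -x 3. */
    static void
    xor3_example(void)
    {
        static uint8_t a[4096], b[4096], c[4096], out[4096];
        const uint8_t *srcs[] = { a, b, c };
        sw_xor(out, srcs, 3, sizeof(out));
    }
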
00:05:06.541 [2024-07-15 18:18:58.631739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:07.108 EAL: TSC is not safe to use in SMP mode 00:05:07.108 EAL: TSC is not invariant 00:05:07.108 [2024-07-15 18:18:59.230611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.108 [2024-07-15 18:18:59.339033] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:07.108 [2024-07-15 18:18:59.349391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:05:07.108 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.109 18:18:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:08.484 18:19:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.484 00:05:08.484 real 0m1.923s 00:05:08.484 user 0m1.276s 00:05:08.484 sys 0m0.655s 00:05:08.484 18:19:00 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.484 ************************************ 00:05:08.484 END TEST accel_dif_verify 00:05:08.484 18:19:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:08.484 ************************************ 00:05:08.484 18:19:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.484 18:19:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:08.484 18:19:00 accel 
-- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:08.484 18:19:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.484 18:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.484 ************************************ 00:05:08.484 START TEST accel_dif_generate 00:05:08.484 ************************************ 00:05:08.484 18:19:00 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:08.484 18:19:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.03wlTw -t 1 -w dif_generate 00:05:08.484 [2024-07-15 18:19:00.601269] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:08.484 [2024-07-15 18:19:00.601507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:09.051 EAL: TSC is not safe to use in SMP mode 00:05:09.051 EAL: TSC is not invariant 00:05:09.051 [2024-07-15 18:19:01.204742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.051 [2024-07-15 18:19:01.313304] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
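The dif_verify run above, and the dif_generate run now being configured, both use the buffer geometry read into the config: 4096 bytes of data split into 512-byte blocks, each carrying an 8-byte Data Integrity Field, so 4096 / 512 = 8 blocks and 64 bytes of DIF per task. A rough sketch of the T10-DIF idea these workloads exercise — the struct layout, tag policy, and crc16_t10dif() helper are assumptions here, not SPDK's dif library:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative 8-byte integrity field per 512-byte data block. */
    struct dif_field {
        uint16_t guard;    /* CRC-16 over the data block */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* Assumed helper: a CRC-16/T10-DIF implementation defined elsewhere. */
    uint16_t crc16_t10dif(const uint8_t *buf, size_t len);

    /* dif_generate: fill one field per block (tag policy invented). */
    static void
    sw_dif_generate(const uint8_t *data, struct dif_field *dif,
                    size_t block_size, size_t num_blocks)
    {
        for (size_t b = 0; b < num_blocks; b++) {
            dif[b].guard = crc16_t10dif(data + b * block_size, block_size);
            dif[b].app_tag = 0;
            dif[b].ref_tag = (uint32_t)b;
        }
    }

    /* dif_verify: recompute each guard and return 0 only if all match. */
    static int
    sw_dif_verify(const uint8_t *data, const struct dif_field *dif,
                  size_t block_size, size_t num_blocks)
    {
        for (size_t b = 0; b < num_blocks; b++) {
            if (dif[b].guard != crc16_t10dif(data + b * block_size,
                                             block_size)) {
                return -1;
            }
        }
        return 0;
    }

The dif_generate_copy workload launched later in the log combines the generate step with a plain copy of the data into a separate destination buffer.
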
00:05:09.051 [2024-07-15 18:19:01.324306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:05:09.051 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:05:09.052 18:19:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:05:10.464 18:19:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:10.464 18:19:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:10.464 18:19:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:10.464 
00:05:10.464 real	0m1.931s
00:05:10.464 user	0m1.294s
00:05:10.464 sys	0m0.645s
00:05:10.464 18:19:02 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.464 18:19:02 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:10.464 ************************************
00:05:10.464 END TEST accel_dif_generate
00:05:10.464 ************************************
00:05:10.464 18:19:02 accel -- common/autotest_common.sh@1142 -- # return 0
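[Editor's note] The block above is accel.sh's xtrace replay of the accel_perf configuration for the dif_generate pass: the software module generating 8 bytes of DIF protection information per 512-byte interval over 4096-byte buffers, for a 1-second run (the values echoed in the val= lines). A minimal hand-run reproduction would look like the sketch below; this is an assumption based on the copy-variant command line the log does show (the dif_generate invocation itself scrolled off above this excerpt), and it omits the throwaway -c JSON config the harness writes under /tmp:

  # software-module DIF generate, 1 second, as the harness times it
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate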
00:05:10.464 18:19:02 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:10.464 18:19:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:10.464 18:19:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.464 18:19:02 accel -- common/autotest_common.sh@10 -- # set +x
00:05:10.464 ************************************
00:05:10.464 START TEST accel_dif_generate_copy
00:05:10.464 ************************************
00:05:10.464 18:19:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:05:10.464 18:19:02 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:10.464 18:19:02 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:10.464 18:19:02 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:10.464 18:19:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.IFFjDr -t 1 -w dif_generate_copy
00:05:10.464 [2024-07-15 18:19:02.575671] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:10.464 [2024-07-15 18:19:02.575885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:11.032 EAL: TSC is not safe to use in SMP mode
00:05:11.032 EAL: TSC is not invariant
00:05:11.032 [2024-07-15 18:19:03.156760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.032 [2024-07-15 18:19:03.273069] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:05:11.032 [2024-07-15 18:19:03.283798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:11.032 18:19:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:05:12.406 18:19:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:12.406 18:19:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:12.406 18:19:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:12.406 
00:05:12.406 real	0m1.910s
00:05:12.406 user	0m1.306s
00:05:12.406 sys	0m0.617s
00:05:12.406 18:19:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:12.406 ************************************
00:05:12.406 END TEST accel_dif_generate_copy
00:05:12.406 ************************************
00:05:12.406 18:19:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:05:12.406 18:19:04 accel -- common/autotest_common.sh@1142 -- # return 0
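[Editor's note] The copy variant pairs DIF generation with a copy into a destination buffer in a single accel operation (per the test name; the dump itself shows the same buffer geometry and timings in family with the in-place pass, real 0m1.910s vs 0m1.931s). A hand-run equivalent, under the same assumptions as the previous sketch:

  # DIF generate + copy in one accel operation
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy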
00:05:12.406 18:19:04 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:05:12.406 18:19:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:12.406 18:19:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:12.406 18:19:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.406 18:19:04 accel -- common/autotest_common.sh@10 -- # set +x
00:05:12.406 ************************************
00:05:12.406 START TEST accel_comp
00:05:12.406 ************************************
00:05:12.406 18:19:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:12.406 18:19:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:05:12.406 18:19:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:05:12.406 18:19:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:12.406 18:19:04 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yoc62u -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:12.406 [2024-07-15 18:19:04.529841] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:12.406 [2024-07-15 18:19:04.530107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:12.973 EAL: TSC is not safe to use in SMP mode
00:05:12.973 EAL: TSC is not invariant
00:05:12.973 [2024-07-15 18:19:05.115365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.973 [2024-07-15 18:19:05.221371] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:12.973 18:19:05 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:05:12.973 18:19:05 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:12.973 18:19:05 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:05:12.973 18:19:05 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:05:12.973 [2024-07-15 18:19:05.231650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:12.974 18:19:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:05:14.370 18:19:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:14.370 18:19:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:05:14.370 18:19:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:14.370 
00:05:14.370 real	0m1.907s
00:05:14.370 user	0m1.283s
00:05:14.370 sys	0m0.630s
00:05:14.370 18:19:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:14.370 18:19:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:05:14.370 ************************************
00:05:14.370 END TEST accel_comp
00:05:14.370 ************************************
00:05:14.370 18:19:06 accel -- common/autotest_common.sh@1142 -- # return 0
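[Editor's note] compress is the first workload here that takes an input corpus: -l points accel_perf at the bundled test/accel/bib file, which the software module compresses in the 4096-byte chunks shown in the dump. Reproduction sketch, same assumptions as above:

  # software compress of the bundled test corpus
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib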
00:05:14.370 18:19:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:14.370 18:19:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:14.370 18:19:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:14.370 18:19:06 accel -- common/autotest_common.sh@10 -- # set +x
00:05:14.370 ************************************
00:05:14.370 START TEST accel_decomp
00:05:14.370 ************************************
00:05:14.370 18:19:06 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:14.370 18:19:06 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:05:14.370 18:19:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:05:14.370 18:19:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:14.370 18:19:06 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.z91DAq -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:14.370 [2024-07-15 18:19:06.479661] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:14.370 [2024-07-15 18:19:06.479887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:14.936 EAL: TSC is not safe to use in SMP mode
00:05:14.936 EAL: TSC is not invariant
00:05:14.936 [2024-07-15 18:19:07.100795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.936 [2024-07-15 18:19:07.207073] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:05:14.936 [2024-07-15 18:19:07.218170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:14.936 18:19:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:05:16.309 18:19:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:16.309 18:19:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:16.309 18:19:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:16.309 
00:05:16.309 real	0m1.950s
00:05:16.309 user	0m1.262s
00:05:16.309 sys	0m0.690s
00:05:16.309 18:19:08 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:16.309 18:19:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:05:16.309 ************************************
00:05:16.309 END TEST accel_decomp
00:05:16.309 ************************************
00:05:16.309 18:19:08 accel -- common/autotest_common.sh@1142 -- # return 0
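[Editor's note] decompress mirrors the compress pass but adds -y, and the dump flips from val=No to val=Yes accordingly; the flag appears to enable verification of the decompressed output (an inference from the dump, not spelled out in the log). Hand-run sketch, same assumptions:

  # decompress the corpus and verify the result (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y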
00:05:16.309 18:19:08 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:16.309 18:19:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:16.309 18:19:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:16.309 18:19:08 accel -- common/autotest_common.sh@10 -- # set +x
00:05:16.309 ************************************
00:05:16.309 START TEST accel_decomp_full
00:05:16.309 ************************************
00:05:16.309 18:19:08 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:16.309 18:19:08 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:05:16.309 18:19:08 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:05:16.309 18:19:08 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:16.309 18:19:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mMoLRV -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:16.309 [2024-07-15 18:19:08.466287] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:16.309 [2024-07-15 18:19:08.466499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:16.875 EAL: TSC is not safe to use in SMP mode
00:05:16.875 EAL: TSC is not invariant
00:05:16.875 [2024-07-15 18:19:09.090781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.875 [2024-07-15 18:19:09.202194] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:16.875 18:19:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:05:16.875 18:19:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:16.875 18:19:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:05:16.875 18:19:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:05:16.875 [2024-07-15 18:19:09.213085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:05:16.876 18:19:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:05:18.255 18:19:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:18.255 18:19:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:18.255 18:19:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:18.255 
00:05:18.255 real	0m1.966s
00:05:18.255 user	0m1.319s
00:05:18.255 sys	0m0.661s
00:05:18.255 18:19:10 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.255 18:19:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:05:18.255 ************************************
00:05:18.255 END TEST accel_decomp_full
00:05:18.255 ************************************
00:05:18.255 18:19:10 accel -- common/autotest_common.sh@1142 -- # return 0
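[Editor's note] The _full variant passes -o 0, and the dump swaps the 4096-byte chunk size for val='111250 bytes', i.e. accel_perf appears to process the whole decompressed payload per operation instead of fixed 4K blocks (inferred from the dump). Sketch, same assumptions:

  # full-buffer decompress (-o 0) with verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0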
00:05:18.255 18:19:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:18.255 18:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:18.255 18:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.255 18:19:10 accel -- common/autotest_common.sh@10 -- # set +x
00:05:18.255 ************************************
00:05:18.255 START TEST accel_decomp_mcore
00:05:18.255 ************************************
00:05:18.255 18:19:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:18.255 18:19:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:18.255 18:19:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:18.255 18:19:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:18.255 18:19:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VZSKDK -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:18.255 [2024-07-15 18:19:10.480206] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:18.255 [2024-07-15 18:19:10.480390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:18.823 EAL: TSC is not safe to use in SMP mode
00:05:18.823 EAL: TSC is not invariant
00:05:18.823 [2024-07-15 18:19:11.103362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:19.083 [2024-07-15 18:19:11.212533] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:19.083 [2024-07-15 18:19:11.212589] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:19.083 [2024-07-15 18:19:11.212598] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:05:19.083 [2024-07-15 18:19:11.212607] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3].
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:19.083 [2024-07-15 18:19:11.225696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.083 [2024-07-15 18:19:11.225580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.083 [2024-07-15 18:19:11.225640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:19.083 [2024-07-15 18:19:11.225689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:19.083 18:19:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
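[Editor's note] -m 0xf pins this pass to a four-core reactor mask, which is why the log shows four "Reactor started" notices and "Total cores available: 4" instead of 1; the result lines that follow (user 0m4.453s against real 0m1.954s) confirm the decompress work genuinely ran in parallel across cores 0-3. Sketch, same assumptions:

  # multi-core decompress across cores 0-3 (reactor mask 0xf)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf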
accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.464 00:05:20.464 real 0m1.954s 00:05:20.464 user 0m4.453s 00:05:20.464 sys 0m0.656s 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.464 ************************************ 00:05:20.464 END TEST accel_decomp_mcore 00:05:20.464 18:19:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:20.464 ************************************ 00:05:20.464 18:19:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.464 18:19:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:20.464 18:19:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:20.464 18:19:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.464 18:19:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.464 ************************************ 00:05:20.464 START TEST accel_decomp_full_mcore 00:05:20.464 ************************************ 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:20.464 18:19:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CLg06E -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:20.464 [2024-07-15 18:19:12.470494] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:20.464 [2024-07-15 18:19:12.470738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:20.723 EAL: TSC is not safe to use in SMP mode 00:05:20.723 EAL: TSC is not invariant 00:05:20.723 [2024-07-15 18:19:13.076484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.982 [2024-07-15 18:19:13.180618] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:20.982 [2024-07-15 18:19:13.180670] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:20.982 [2024-07-15 18:19:13.180679] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:20.982 [2024-07-15 18:19:13.180687] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
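A note on the flags in the accel_perf invocation above, which can be re-run by hand outside the harness. This is a minimal sketch assuming the same vagrant tree layout; the /tmp//sh-np.* JSON config is generated per run by the wrapper and omitted here, and the reading of -o 0 is an inference from the '111250 bytes' value echoed below:

  SPDK=/home/vagrant/spdk_repo/spdk
  # -t 1: run for one second; -w decompress: software inflate workload
  # -l: compressed input file; -y: verify the decompressed output
  # -o 0: take the transfer size from the input file (inference, see note above)
  # -m 0xf: core mask, cores 0-3 -- the four reactors started below
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf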
00:05:20.982 [2024-07-15 18:19:13.193021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.982 [2024-07-15 18:19:13.192931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.982 [2024-07-15 18:19:13.193017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.982 [2024-07-15 18:19:13.192979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.982 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 18:19:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 
18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.358 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.359 00:05:22.359 real 0m1.946s 00:05:22.359 user 0m4.486s 
00:05:22.359 sys 0m0.640s 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.359 18:19:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:22.359 ************************************ 00:05:22.359 END TEST accel_decomp_full_mcore 00:05:22.359 ************************************ 00:05:22.359 18:19:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.359 18:19:14 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:22.359 18:19:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:22.359 18:19:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.359 18:19:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.359 ************************************ 00:05:22.359 START TEST accel_decomp_mthread 00:05:22.359 ************************************ 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:22.359 18:19:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.QPOwWP -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:22.359 [2024-07-15 18:19:14.454336] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:22.359 [2024-07-15 18:19:14.454596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:22.927 EAL: TSC is not safe to use in SMP mode 00:05:22.927 EAL: TSC is not invariant 00:05:22.927 [2024-07-15 18:19:15.061035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.927 [2024-07-15 18:19:15.181693] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
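The START TEST / END TEST banners and the real/user/sys triplets that bracket each case come from the run_test wrapper in common/autotest_common.sh. A rough sketch of the behaviour visible in this log, not the verbatim SPDK implementation:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines seen throughout
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }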
00:05:22.927 [2024-07-15 18:19:15.192992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:22.927 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:22.928 18:19:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.412 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.412 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.412 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.413 00:05:24.413 real 0m1.949s 00:05:24.413 user 0m1.294s 00:05:24.413 sys 0m0.658s 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.413 ************************************ 00:05:24.413 END TEST accel_decomp_mthread 00:05:24.413 ************************************ 00:05:24.413 18:19:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 18:19:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.413 18:19:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:24.413 18:19:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:24.413 18:19:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.413 18:19:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 ************************************ 00:05:24.413 START TEST accel_decomp_full_mthread 00:05:24.413 ************************************ 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:24.413 18:19:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:24.413 18:19:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.JsrXlM -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:24.413 [2024-07-15 18:19:16.445900] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:24.413 [2024-07-15 18:19:16.446099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:24.989 EAL: TSC is not safe to use in SMP mode 00:05:24.989 EAL: TSC is not invariant 00:05:24.989 [2024-07-15 18:19:17.049722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.989 [2024-07-15 18:19:17.155960] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:24.989 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:24.989 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.989 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
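accel_decomp_full_mthread combines the full-buffer -o 0 mode with the -T 2 flag seen in the previous case; the EAL parameters above show a single core (-c 0x1), so -T 2 -- judging by the mthread test names -- asks accel_perf for two worker threads on that core rather than for more cores. A hedged one-liner for comparison:

  SPDK=/home/vagrant/spdk_repo/spdk
  # one core, two threads per core (sketch; flags as in the log above)
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2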
00:05:24.990 [2024-07-15 18:19:17.165582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.990 18:19:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.363 00:05:26.363 real 0m1.959s 00:05:26.363 user 0m1.326s 00:05:26.363 sys 0m0.643s 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.363 ************************************ 00:05:26.363 END TEST accel_decomp_full_mthread 00:05:26.363 ************************************ 00:05:26.363 18:19:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:26.363 18:19:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.363 18:19:18 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:26.363 18:19:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.jksvNH 00:05:26.363 18:19:18 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:26.363 18:19:18 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.363 18:19:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.363 ************************************ 00:05:26.363 START TEST accel_dif_functional_tests 00:05:26.363 ************************************ 00:05:26.363 18:19:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.jksvNH 00:05:26.363 [2024-07-15 18:19:18.442926] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:26.363 [2024-07-15 18:19:18.443120] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:26.930 EAL: TSC is not safe to use in SMP mode 00:05:26.930 EAL: TSC is not invariant 00:05:26.930 [2024-07-15 18:19:19.049649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.930 [2024-07-15 18:19:19.169833] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:26.930 [2024-07-15 18:19:19.169898] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:26.930 [2024-07-15 18:19:19.169910] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:26.930 18:19:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:26.930 18:19:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.930 18:19:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.930 18:19:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.930 18:19:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.930 18:19:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.930 18:19:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:26.930 18:19:19 accel -- accel/accel.sh@41 -- # jq -r . 
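Unlike the accel_perf cases, the DIF functional tests are a standalone CUnit app: it gets three reactors (-c 0x7 in the EAL parameters above) and a throwaway JSON config via -c. The test names that follow exercise the three fields of the 8-byte T10 DIF trailer -- the 2-byte Guard CRC, the 2-byte App Tag, and the 4-byte Ref Tag -- across the verify, verify-copy and generate-copy paths. A by-hand sketch, with the config path illustrative rather than the harness-generated one:

  SPDK=/home/vagrant/spdk_repo/spdk
  echo '{}' > /tmp/dif.json    # placeholder app config (assumption)
  "$SPDK/test/accel/dif/dif" -c /tmp/dif.json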
00:05:26.930 [2024-07-15 18:19:19.182320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.930 [2024-07-15 18:19:19.182251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.930 [2024-07-15 18:19:19.182311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.930 00:05:26.930 00:05:26.930 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.930 http://cunit.sourceforge.net/ 00:05:26.930 00:05:26.930 00:05:26.930 Suite: accel_dif 00:05:26.930 Test: verify: DIF generated, GUARD check ...passed 00:05:26.930 Test: verify: DIF generated, APPTAG check ...passed 00:05:26.930 Test: verify: DIF generated, REFTAG check ...passed 00:05:26.930 Test: verify: DIF not generated, GUARD check ...passed 00:05:26.930 Test: verify: DIF not generated, APPTAG check ...passed 00:05:26.930 Test: verify: DIF not generated, REFTAG check ...passed 00:05:26.930 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:26.930 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:26.930 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:26.930 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:26.930 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:26.930 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:26.930 Test: verify copy: DIF generated, GUARD check ...passed 00:05:26.930 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:26.930 Test: verify copy: DIF generated, REFTAG check ...[2024-07-15 18:19:19.202449] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:26.930 [2024-07-15 18:19:19.202509] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:26.930 [2024-07-15 18:19:19.202542] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:26.930 [2024-07-15 18:19:19.202599] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:26.930 [2024-07-15 18:19:19.202689] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:26.930 passed 00:05:26.930 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:26.930 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:26.930 Test: verify copy: DIF not generated, REFTAG check ...passed 00:05:26.930 Test: generate copy: DIF generated, GUARD check ...passed 00:05:26.930 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:26.930 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:26.930 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:26.930 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:26.930 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:26.930 Test: generate copy: iovecs-len validate ...[2024-07-15 18:19:19.202785] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:26.930 [2024-07-15 18:19:19.202813] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:26.930 [2024-07-15 18:19:19.202839] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:26.930 [2024-07-15 18:19:19.202972] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:26.930 passed 00:05:26.930 Test: generate copy: buffer alignment validate ...passed 00:05:26.930 00:05:26.930 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.930 suites 1 1 n/a 0 0 00:05:26.930 tests 26 26 26 0 0 00:05:26.930 asserts 115 115 115 0 n/a 00:05:26.930 00:05:26.930 Elapsed time = 0.016 seconds 00:05:27.187 00:05:27.187 real 0m0.992s 00:05:27.187 user 0m0.512s 00:05:27.187 sys 0m0.649s 00:05:27.187 18:19:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.187 18:19:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:27.187 ************************************ 00:05:27.187 END TEST accel_dif_functional_tests 00:05:27.187 ************************************ 00:05:27.187 18:19:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.187 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:27.187 00:05:27.187 real 0m44.211s 00:05:27.187 user 0m34.774s 00:05:27.187 sys 0m16.435s 00:05:27.187 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.187 18:19:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:27.187 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:27.187 18:19:19 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.187 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.187 18:19:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.187 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.187 ************************************ 00:05:27.187 END TEST accel 00:05:27.187 ************************************ 00:05:27.187 18:19:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.187 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.187 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.187 18:19:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.188 18:19:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:27.188 18:19:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.188 18:19:19 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:27.188 18:19:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:27.188 18:19:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:27.188 18:19:19 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:27.188 18:19:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:27.188 18:19:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.188 18:19:19 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:27.188 18:19:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.188 18:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.188 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 ************************************ 00:05:27.188 START TEST accel_rpc 00:05:27.188 ************************************ 00:05:27.188 18:19:19 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:27.446 * Looking for test storage... 00:05:27.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:27.446 18:19:19 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.446 18:19:19 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47488 00:05:27.446 18:19:19 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:27.446 18:19:19 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47488 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47488 ']' 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.446 18:19:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.446 [2024-07-15 18:19:19.640639] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:27.446 [2024-07-15 18:19:19.640863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:28.013 EAL: TSC is not safe to use in SMP mode 00:05:28.013 EAL: TSC is not invariant 00:05:28.013 [2024-07-15 18:19:20.217265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.013 [2024-07-15 18:19:20.339664] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
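The target above is started with --wait-for-rpc, so subsystem initialization is deferred until an explicit framework_start_init RPC; that window is what lets the accel_assign_opcode test below re-map an opcode before the accel framework comes up. A sketch of the sequence the test drives, using the RPC names exactly as they appear in this log:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK/scripts/rpc.py" framework_start_init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software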
00:05:28.013 [2024-07-15 18:19:20.342447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 ************************************ 00:05:28.580 START TEST accel_assign_opcode 00:05:28.580 ************************************ 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 [2024-07-15 18:19:20.706798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 [2024-07-15 18:19:20.714784] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.580 software 00:05:28.580 00:05:28.580 real 0m0.075s 00:05:28.580 user 0m0.008s 00:05:28.580 sys 0m0.012s 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.580 18:19:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.580 ************************************ 00:05:28.580 END TEST accel_assign_opcode 00:05:28.580 ************************************ 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:28.580 18:19:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47488 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47488 ']' 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47488 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47488 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:28.580 killing process with pid 47488 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47488' 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@967 -- # kill 47488 00:05:28.580 18:19:20 accel_rpc -- common/autotest_common.sh@972 -- # wait 47488 00:05:28.838 00:05:28.838 real 0m1.625s 00:05:28.838 user 0m1.475s 00:05:28.838 sys 0m0.798s 00:05:28.838 ************************************ 00:05:28.838 END TEST accel_rpc 00:05:28.838 ************************************ 00:05:28.838 18:19:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.838 18:19:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.838 18:19:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.838 18:19:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:28.838 18:19:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.838 18:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.838 18:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.838 ************************************ 00:05:28.838 START TEST app_cmdline 00:05:28.838 ************************************ 00:05:28.838 18:19:21 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.096 * Looking for test storage... 00:05:29.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:29.096 18:19:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.096 18:19:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47570 00:05:29.096 18:19:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47570 00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47570 ']' 00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
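[editor's note] The accel_rpc suite that just finished is a short round trip over three RPCs. Stripped of the harness, the same check looks like this, against a target already started with --wait-for-rpc as sketched earlier:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # The target accepts an unknown module name at RPC time; the NOTICE in
  # the trace records the (deliberately bogus) assignment.
  "$rpc_py" accel_assign_opc -o copy -m incorrect
  # Re-assigning the same opcode replaces the previous choice.
  "$rpc_py" accel_assign_opc -o copy -m software
  # Assignments take effect when the framework initializes.
  "$rpc_py" framework_start_init
  # Confirm the copy opcode landed on the software module.
  "$rpc_py" accel_get_opc_assignments | jq -r .copy | grep software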
00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.096 18:19:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.096 18:19:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.096 [2024-07-15 18:19:21.306488] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:29.096 [2024-07-15 18:19:21.306660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:29.661 EAL: TSC is not safe to use in SMP mode 00:05:29.661 EAL: TSC is not invariant 00:05:29.661 [2024-07-15 18:19:21.910566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.661 [2024-07-15 18:19:22.019851] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:29.919 [2024-07-15 18:19:22.022011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.177 18:19:22 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.177 18:19:22 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:30.177 18:19:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:30.435 { 00:05:30.435 "version": "SPDK v24.09-pre git sha1 6c0846996", 00:05:30.435 "fields": { 00:05:30.435 "major": 24, 00:05:30.435 "minor": 9, 00:05:30.435 "patch": 0, 00:05:30.435 "suffix": "-pre", 00:05:30.435 "commit": "6c0846996" 00:05:30.435 } 00:05:30.435 } 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.435 18:19:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:30.435 18:19:22 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.691 request: 00:05:30.691 { 00:05:30.691 "method": "env_dpdk_get_mem_stats", 00:05:30.691 "req_id": 1 00:05:30.691 } 00:05:30.691 Got JSON-RPC error response 00:05:30.691 response: 00:05:30.691 { 00:05:30.691 "code": -32601, 00:05:30.691 "message": "Method not found" 00:05:30.691 } 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.691 18:19:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47570 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47570 ']' 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47570 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47570 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:30.691 killing process with pid 47570 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47570' 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@967 -- # kill 47570 00:05:30.691 18:19:22 app_cmdline -- common/autotest_common.sh@972 -- # wait 47570 00:05:30.948 00:05:30.948 real 0m2.113s 00:05:30.948 user 0m2.397s 00:05:30.948 sys 0m0.893s 00:05:30.948 18:19:23 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.948 18:19:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.948 ************************************ 00:05:30.948 END TEST app_cmdline 00:05:30.948 ************************************ 00:05:31.206 18:19:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.206 18:19:23 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.206 18:19:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.206 18:19:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.206 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:05:31.206 ************************************ 00:05:31.206 START TEST version 00:05:31.206 ************************************ 00:05:31.206 18:19:23 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.206 * Looking for test storage... 
00:05:31.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:31.206 18:19:23 version -- app/version.sh@17 -- # get_header_version major 00:05:31.206 18:19:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # cut -f2 00:05:31.206 18:19:23 version -- app/version.sh@17 -- # major=24 00:05:31.206 18:19:23 version -- app/version.sh@18 -- # get_header_version minor 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # cut -f2 00:05:31.206 18:19:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.206 18:19:23 version -- app/version.sh@18 -- # minor=9 00:05:31.206 18:19:23 version -- app/version.sh@19 -- # get_header_version patch 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # cut -f2 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.206 18:19:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.206 18:19:23 version -- app/version.sh@19 -- # patch=0 00:05:31.206 18:19:23 version -- app/version.sh@20 -- # get_header_version suffix 00:05:31.206 18:19:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # cut -f2 00:05:31.206 18:19:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.206 18:19:23 version -- app/version.sh@20 -- # suffix=-pre 00:05:31.206 18:19:23 version -- app/version.sh@22 -- # version=24.9 00:05:31.206 18:19:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.206 18:19:23 version -- app/version.sh@28 -- # version=24.9rc0 00:05:31.206 18:19:23 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:31.206 18:19:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.206 18:19:23 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:31.206 18:19:23 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:31.206 00:05:31.206 real 0m0.207s 00:05:31.206 user 0m0.156s 00:05:31.206 sys 0m0.134s 00:05:31.206 18:19:23 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.206 18:19:23 version -- common/autotest_common.sh@10 -- # set +x 00:05:31.206 ************************************ 00:05:31.206 END TEST version 00:05:31.206 ************************************ 00:05:31.466 18:19:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.466 18:19:23 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:31.466 18:19:23 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:31.466 18:19:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.466 18:19:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.466 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:05:31.466 ************************************ 00:05:31.466 START TEST blockdev_general 00:05:31.466 
************************************ 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:31.466 * Looking for test storage... 00:05:31.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.466 18:19:23 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47705 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47705 00:05:31.466 18:19:23 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47705 ']' 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
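[editor's note] Backing up briefly to the app_cmdline suite: its target was started with an RPC allowlist, so any method outside the list is rejected with JSON-RPC error -32601 before reaching a handler, which is exactly the request/response pair captured above. A standalone sketch of the same behavior (the sleep is a crude stand-in for waitforlisten):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Only these two methods are reachable; everything else is rejected.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      --rpcs-allowed spdk_get_version,rpc_get_methods &
  spdk_tgt_pid=$!
  sleep 1
  "$rpc_py" spdk_get_version          # allowed: prints the version object
  "$rpc_py" env_dpdk_get_mem_stats    # rejected: "Method not found" (-32601)
  kill "$spdk_tgt_pid"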
00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.466 18:19:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:31.466 [2024-07-15 18:19:23.743486] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:31.466 [2024-07-15 18:19:23.743624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:32.031 EAL: TSC is not safe to use in SMP mode 00:05:32.031 EAL: TSC is not invariant 00:05:32.031 [2024-07-15 18:19:24.332210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.288 [2024-07-15 18:19:24.440430] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:32.288 [2024-07-15 18:19:24.442619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.544 18:19:24 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.544 18:19:24 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:05:32.544 18:19:24 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:32.544 18:19:24 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:32.544 18:19:24 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:32.544 18:19:24 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.544 18:19:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.544 [2024-07-15 18:19:24.876115] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:32.544 [2024-07-15 18:19:24.876176] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:32.544 00:05:32.544 [2024-07-15 18:19:24.884100] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:32.544 [2024-07-15 18:19:24.884124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:32.544 00:05:32.544 Malloc0 00:05:32.801 Malloc1 00:05:32.801 Malloc2 00:05:32.801 Malloc3 00:05:32.801 Malloc4 00:05:32.801 Malloc5 00:05:32.801 Malloc6 00:05:32.801 Malloc7 00:05:32.801 Malloc8 00:05:32.801 Malloc9 00:05:32.801 [2024-07-15 18:19:24.972107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:32.801 [2024-07-15 18:19:24.972162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.801 [2024-07-15 18:19:24.972186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125e483a980 00:05:32.801 [2024-07-15 18:19:24.972196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.801 [2024-07-15 18:19:24.972552] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.801 [2024-07-15 18:19:24.972585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:32.801 TestPT 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:32.801 5000+0 records in 00:05:32.801 5000+0 records out 00:05:32.801 10240000 bytes transferred in 0.026771 secs (382503627 bytes/sec) 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.801 AIO0 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:32.801 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:32.801 18:19:25 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:32.802 18:19:25 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:32.802 18:19:25 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.802 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:33.060 18:19:25 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.060 18:19:25 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:33.060 18:19:25 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:33.061 18:19:25 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c3e73e81-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3e73e81-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b6dee01-3e99-e05c-92a2-eb4fd01fbce8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0b6dee01-3e99-e05c-92a2-eb4fd01fbce8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6811a84d-08f2-9d5a-b41d-311276d378b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6811a84d-08f2-9d5a-b41d-311276d378b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2bba8f9f-b516-2556-983e-2d367e2ab900"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2bba8f9f-b516-2556-983e-2d367e2ab900",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2ddf6ded-c88a-e955-9ded-66a6a531b73f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ddf6ded-c88a-e955-9ded-66a6a531b73f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "45806506-5974-ca53-b5b5-7b4a8ccbcd74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45806506-5974-ca53-b5b5-7b4a8ccbcd74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a9139bbf-3c51-115e-9e07-c550694bc2e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9139bbf-3c51-115e-9e07-c550694bc2e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "42975f5c-caa0-df5f-9117-a76b20453630"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42975f5c-caa0-df5f-9117-a76b20453630",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "688791e4-4a42-2657-bdca-1fbb61f09dda"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "688791e4-4a42-2657-bdca-1fbb61f09dda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ed480dbb-afde-b053-a7c0-90c01b595a72"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ed480dbb-afde-b053-a7c0-90c01b595a72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3f4b999-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c3ec1ff5-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c3ed5872-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c3ee90ff-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c3efc96f-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c3f71d26-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c3f101f2-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c3f23a83-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:33.061 18:19:25 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:05:33.061 18:19:25 blockdev_general -- bdev/blockdev.sh@752 -- # 
hello_world_bdev=Malloc0 00:05:33.061 18:19:25 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:33.061 18:19:25 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47705 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47705 ']' 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47705 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47705 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:33.061 killing process with pid 47705 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47705' 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@967 -- # kill 47705 00:05:33.061 18:19:25 blockdev_general -- common/autotest_common.sh@972 -- # wait 47705 00:05:33.626 18:19:25 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:33.626 18:19:25 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:33.626 18:19:25 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.626 18:19:25 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.626 18:19:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 ************************************ 00:05:33.627 START TEST bdev_hello_world 00:05:33.627 ************************************ 00:05:33.627 18:19:25 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:33.627 [2024-07-15 18:19:25.764106] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:33.627 [2024-07-15 18:19:25.764428] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:34.194 EAL: TSC is not safe to use in SMP mode 00:05:34.194 EAL: TSC is not invariant 00:05:34.194 [2024-07-15 18:19:26.403810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.194 [2024-07-15 18:19:26.513816] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
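[editor's note] Before following the hello_world run: the wall of JSON a few screens up is the blockdev prologue doing two things, snapshotting the accel/bdev/iobuf subsystem configuration (replayed below by hello_bdev and bdevio via --json) and listing every unclaimed bdev to build the device list under test. The same queries by hand, using the RPC names shown in the trace:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Dump per-subsystem config; blockdev.sh assembles these fragments into
  # test/bdev/bdev.json so later tools can replay the setup offline.
  for subsystem in accel bdev iobuf; do
      "$rpc_py" save_subsystem_config -n "$subsystem"
  done
  # Device list under test: every bdev not claimed by a raid or passthru
  # bdev stacked on top of it (same jq filter the trace shows).
  "$rpc_py" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'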
00:05:34.194 [2024-07-15 18:19:26.516029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.452 [2024-07-15 18:19:26.575581] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:34.452 [2024-07-15 18:19:26.575635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:34.452 [2024-07-15 18:19:26.583560] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:34.452 [2024-07-15 18:19:26.583589] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:34.452 [2024-07-15 18:19:26.591574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:34.452 [2024-07-15 18:19:26.591603] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:34.452 [2024-07-15 18:19:26.591612] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:34.452 [2024-07-15 18:19:26.639588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:34.452 [2024-07-15 18:19:26.639643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.452 [2024-07-15 18:19:26.639654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2264efc36800 00:05:34.452 [2024-07-15 18:19:26.639663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.452 [2024-07-15 18:19:26.640048] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.452 [2024-07-15 18:19:26.640069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:34.452 [2024-07-15 18:19:26.739754] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:34.452 [2024-07-15 18:19:26.739805] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:34.452 [2024-07-15 18:19:26.739820] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:34.452 [2024-07-15 18:19:26.739835] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:34.452 [2024-07-15 18:19:26.739849] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:34.452 [2024-07-15 18:19:26.739858] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:34.452 [2024-07-15 18:19:26.739869] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
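[editor's note] That "Hello World!" read-back is the entire assertion of the suite; under the harness it is a single invocation of the example app against the saved config (command line verbatim from the trace above):

  spdk_dir=/home/vagrant/spdk_repo/spdk
  # Replays the snapshotted bdev config, then writes and reads back a
  # string on Malloc0; expect "Read string from bdev : Hello World!".
  "$spdk_dir/build/examples/hello_bdev" \
      --json "$spdk_dir/test/bdev/bdev.json" -b Malloc0 ''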
00:05:34.452 00:05:34.452 [2024-07-15 18:19:26.739878] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:34.711 00:05:34.711 real 0m1.272s 00:05:34.711 user 0m0.594s 00:05:34.711 sys 0m0.676s 00:05:34.711 18:19:27 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.711 18:19:27 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:34.711 ************************************ 00:05:34.711 END TEST bdev_hello_world 00:05:34.711 ************************************ 00:05:34.711 18:19:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:34.711 18:19:27 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:34.711 18:19:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:34.711 18:19:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.711 18:19:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:34.968 ************************************ 00:05:34.968 START TEST bdev_bounds 00:05:34.968 ************************************ 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47757 00:05:34.968 Process bdevio pid: 47757 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47757' 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47757 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47757 ']' 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.968 18:19:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:34.968 [2024-07-15 18:19:27.083139] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:34.968 [2024-07-15 18:19:27.083322] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:35.556 EAL: TSC is not safe to use in SMP mode 00:05:35.556 EAL: TSC is not invariant 00:05:35.556 [2024-07-15 18:19:27.714553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.556 [2024-07-15 18:19:27.837599] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:35.556 [2024-07-15 18:19:27.837656] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
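[editor's note] The bdev_bounds suite starting here follows the same wait-and-trigger pattern: bdevio comes up on three cores (-c 0x7 per the EAL parameters above) with 2048 MB of pre-reserved memory (PRE_RESERVED_MEM from the blockdev prologue), and once its reactors start, tests.py fires perform_tests at it over RPC, producing the CUnit output that follows. A sketch by hand; the sleep again stands in for waitforlisten, and the default RPC socket is assumed:

  spdk_dir=/home/vagrant/spdk_repo/spdk
  # Start bdevio waiting for a test trigger against the saved config.
  "$spdk_dir/test/bdev/bdevio/bdevio" -w -s 2048 \
      --json "$spdk_dir/test/bdev/bdev.json" '' &
  bdevio_pid=$!
  sleep 1
  # Kick off every registered CUnit suite; results go to bdevio's log.
  "$spdk_dir/test/bdev/bdevio/tests.py" perform_tests
  kill "$bdevio_pid"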
00:05:35.556 [2024-07-15 18:19:27.837666] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:35.556 [2024-07-15 18:19:27.841147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.556 [2024-07-15 18:19:27.841063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.556 [2024-07-15 18:19:27.841142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.556 [2024-07-15 18:19:27.900613] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:35.556 [2024-07-15 18:19:27.900674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:35.834 [2024-07-15 18:19:27.908593] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:35.834 [2024-07-15 18:19:27.908623] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:35.834 [2024-07-15 18:19:27.916609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:35.834 [2024-07-15 18:19:27.916635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:35.834 [2024-07-15 18:19:27.916644] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:35.834 [2024-07-15 18:19:27.964623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:35.834 [2024-07-15 18:19:27.964682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.834 [2024-07-15 18:19:27.964693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x39dd06836800 00:05:35.834 [2024-07-15 18:19:27.964702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.834 [2024-07-15 18:19:27.965105] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.834 [2024-07-15 18:19:27.965133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:36.091 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.091 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:05:36.091 18:19:28 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:36.091 I/O targets: 00:05:36.091 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:36.091 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:36.091 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:36.091 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:36.091 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:36.091 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:36.091 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:36.091 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:36.091 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:36.091 00:05:36.091 00:05:36.091 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.091 http://cunit.sourceforge.net/ 00:05:36.091 00:05:36.091 00:05:36.091 Suite: bdevio tests on: 
AIO0
00:05:36.091 Test: blockdev write read block ...passed
00:05:36.091 Test: blockdev write zeroes read block ...passed
00:05:36.091 Test: blockdev write zeroes read no split ...passed
00:05:36.091 Test: blockdev write zeroes read split ...passed
00:05:36.091 Test: blockdev write zeroes read split partial ...passed
00:05:36.091 Test: blockdev reset ...passed
00:05:36.091 Test: blockdev write read 8 blocks ...passed
00:05:36.091 Test: blockdev write read size > 128k ...passed
00:05:36.091 Test: blockdev write read invalid size ...passed
00:05:36.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.091 Test: blockdev write read max offset ...passed
00:05:36.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.091 Test: blockdev writev readv 8 blocks ...passed
00:05:36.091 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.091 Test: blockdev writev readv block ...passed
00:05:36.091 Test: blockdev writev readv size > 128k ...passed
00:05:36.091 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.091 Test: blockdev comparev and writev ...passed
00:05:36.091 Test: blockdev nvme passthru rw ...passed
00:05:36.091 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.091 Test: blockdev nvme admin passthru ...passed
00:05:36.091 Test: blockdev copy ...passed
00:05:36.091 Suite: bdevio tests on: raid1
00:05:36.091 Test: blockdev write read block ...passed
00:05:36.091 Test: blockdev write zeroes read block ...passed
00:05:36.091 Test: blockdev write zeroes read no split ...passed
00:05:36.091 Test: blockdev write zeroes read split ...passed
00:05:36.091 Test: blockdev write zeroes read split partial ...passed
00:05:36.091 Test: blockdev reset ...passed
00:05:36.091 Test: blockdev write read 8 blocks ...passed
00:05:36.091 Test: blockdev write read size > 128k ...passed
00:05:36.091 Test: blockdev write read invalid size ...passed
00:05:36.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.091 Test: blockdev write read max offset ...passed
00:05:36.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.091 Test: blockdev writev readv 8 blocks ...passed
00:05:36.091 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.091 Test: blockdev writev readv block ...passed
00:05:36.091 Test: blockdev writev readv size > 128k ...passed
00:05:36.091 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.091 Test: blockdev comparev and writev ...passed
00:05:36.091 Test: blockdev nvme passthru rw ...passed
00:05:36.091 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.091 Test: blockdev nvme admin passthru ...passed
00:05:36.091 Test: blockdev copy ...passed
00:05:36.091 Suite: bdevio tests on: concat0
00:05:36.091 Test: blockdev write read block ...passed
00:05:36.091 Test: blockdev write zeroes read block ...passed
00:05:36.091 Test: blockdev write zeroes read no split ...passed
00:05:36.091 Test: blockdev write zeroes read split ...passed
00:05:36.091 Test: blockdev write zeroes read split partial ...passed
00:05:36.091 Test: blockdev reset ...passed
00:05:36.091 Test: blockdev write read 8 blocks ...passed
00:05:36.091 Test: blockdev write read size > 128k ...passed
00:05:36.091 Test: blockdev write read invalid size ...passed
00:05:36.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.091 Test: blockdev write read max offset ...passed
00:05:36.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.091 Test: blockdev writev readv 8 blocks ...passed
00:05:36.091 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.091 Test: blockdev writev readv block ...passed
00:05:36.091 Test: blockdev writev readv size > 128k ...passed
00:05:36.091 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.091 Test: blockdev comparev and writev ...passed
00:05:36.091 Test: blockdev nvme passthru rw ...passed
00:05:36.091 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.091 Test: blockdev nvme admin passthru ...passed
00:05:36.091 Test: blockdev copy ...passed
00:05:36.091 Suite: bdevio tests on: raid0
00:05:36.350 Test: blockdev write read block ...passed
00:05:36.350 Test: blockdev write zeroes read block ...passed
00:05:36.350 Test: blockdev write zeroes read no split ...passed
00:05:36.350 Test: blockdev write zeroes read split ...passed
00:05:36.350 Test: blockdev write zeroes read split partial ...passed
00:05:36.350 Test: blockdev reset ...passed
00:05:36.350 Test: blockdev write read 8 blocks ...passed
00:05:36.350 Test: blockdev write read size > 128k ...passed
00:05:36.350 Test: blockdev write read invalid size ...passed
00:05:36.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.350 Test: blockdev write read max offset ...passed
00:05:36.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.350 Test: blockdev writev readv 8 blocks ...passed
00:05:36.350 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.350 Test: blockdev writev readv block ...passed
00:05:36.350 Test: blockdev writev readv size > 128k ...passed
00:05:36.350 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.350 Test: blockdev comparev and writev ...passed
00:05:36.350 Test: blockdev nvme passthru rw ...passed
00:05:36.350 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.350 Test: blockdev nvme admin passthru ...passed
00:05:36.350 Test: blockdev copy ...passed
00:05:36.350 Suite: bdevio tests on: TestPT
00:05:36.350 Test: blockdev write read block ...passed
00:05:36.350 Test: blockdev write zeroes read block ...passed
00:05:36.350 Test: blockdev write zeroes read no split ...passed
00:05:36.350 Test: blockdev write zeroes read split ...passed
00:05:36.350 Test: blockdev write zeroes read split partial ...passed
00:05:36.350 Test: blockdev reset ...passed
00:05:36.350 Test: blockdev write read 8 blocks ...passed
00:05:36.350 Test: blockdev write read size > 128k ...passed
00:05:36.350 Test: blockdev write read invalid size ...passed
00:05:36.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.350 Test: blockdev write read max offset ...passed
00:05:36.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.350 Test: blockdev writev readv 8 blocks ...passed
00:05:36.350 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.350 Test: blockdev writev readv block ...passed
00:05:36.350 Test: blockdev writev readv size > 128k ...passed
00:05:36.350 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.350 Test: blockdev comparev and writev ...passed
00:05:36.350 Test: blockdev nvme passthru rw ...passed
00:05:36.350 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.350 Test: blockdev nvme admin passthru ...passed
00:05:36.350 Test: blockdev copy ...passed
00:05:36.350 Suite: bdevio tests on: Malloc2p7
00:05:36.350 Test: blockdev write read block ...passed
00:05:36.350 Test: blockdev write zeroes read block ...passed
00:05:36.350 Test: blockdev write zeroes read no split ...passed
00:05:36.350 Test: blockdev write zeroes read split ...passed
00:05:36.350 Test: blockdev write zeroes read split partial ...passed
00:05:36.350 Test: blockdev reset ...passed
00:05:36.350 Test: blockdev write read 8 blocks ...passed
00:05:36.350 Test: blockdev write read size > 128k ...passed
00:05:36.350 Test: blockdev write read invalid size ...passed
00:05:36.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.350 Test: blockdev write read max offset ...passed
00:05:36.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.350 Test: blockdev writev readv 8 blocks ...passed
00:05:36.350 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.350 Test: blockdev writev readv block ...passed
00:05:36.350 Test: blockdev writev readv size > 128k ...passed
00:05:36.350 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.350 Test: blockdev comparev and writev ...passed
00:05:36.350 Test: blockdev nvme passthru rw ...passed
00:05:36.350 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.350 Test: blockdev nvme admin passthru ...passed
00:05:36.350 Test: blockdev copy ...passed
00:05:36.350 Suite: bdevio tests on: Malloc2p6
00:05:36.350 Test: blockdev write read block ...passed
00:05:36.350 Test: blockdev write zeroes read block ...passed
00:05:36.350 Test: blockdev write zeroes read no split ...passed
00:05:36.350 Test: blockdev write zeroes read split ...passed
00:05:36.350 Test: blockdev write zeroes read split partial ...passed
00:05:36.350 Test: blockdev reset ...passed
00:05:36.350 Test: blockdev write read 8 blocks ...passed
00:05:36.350 Test: blockdev write read size > 128k ...passed
00:05:36.350 Test: blockdev write read invalid size ...passed
00:05:36.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.350 Test: blockdev write read max offset ...passed
00:05:36.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.350 Test: blockdev writev readv 8 blocks ...passed
00:05:36.350 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.350 Test: blockdev writev readv block ...passed
00:05:36.350 Test: blockdev writev readv size > 128k ...passed
00:05:36.350 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.350 Test: blockdev comparev and writev ...passed
00:05:36.350 Test: blockdev nvme passthru rw ...passed
00:05:36.350 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.350 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p5
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p4
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p3
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p2
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p1
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc2p0
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc1p1
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.351 Test: blockdev write zeroes read split partial ...passed
00:05:36.351 Test: blockdev reset ...passed
00:05:36.351 Test: blockdev write read 8 blocks ...passed
00:05:36.351 Test: blockdev write read size > 128k ...passed
00:05:36.351 Test: blockdev write read invalid size ...passed
00:05:36.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.351 Test: blockdev write read max offset ...passed
00:05:36.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.351 Test: blockdev writev readv 8 blocks ...passed
00:05:36.351 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.351 Test: blockdev writev readv block ...passed
00:05:36.351 Test: blockdev writev readv size > 128k ...passed
00:05:36.351 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.351 Test: blockdev comparev and writev ...passed
00:05:36.351 Test: blockdev nvme passthru rw ...passed
00:05:36.351 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.351 Test: blockdev nvme admin passthru ...passed
00:05:36.351 Test: blockdev copy ...passed
00:05:36.351 Suite: bdevio tests on: Malloc1p0
00:05:36.351 Test: blockdev write read block ...passed
00:05:36.351 Test: blockdev write zeroes read block ...passed
00:05:36.351 Test: blockdev write zeroes read no split ...passed
00:05:36.351 Test: blockdev write zeroes read split ...passed
00:05:36.352 Test: blockdev write zeroes read split partial ...passed
00:05:36.352 Test: blockdev reset ...passed
00:05:36.352 Test: blockdev write read 8 blocks ...passed
00:05:36.352 Test: blockdev write read size > 128k ...passed
00:05:36.352 Test: blockdev write read invalid size ...passed
00:05:36.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.352 Test: blockdev write read max offset ...passed
00:05:36.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.352 Test: blockdev writev readv 8 blocks ...passed
00:05:36.352 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.352 Test: blockdev writev readv block ...passed
00:05:36.352 Test: blockdev writev readv size > 128k ...passed
00:05:36.352 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.352 Test: blockdev comparev and writev ...passed
00:05:36.352 Test: blockdev nvme passthru rw ...passed
00:05:36.352 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.352 Test: blockdev nvme admin passthru ...passed
00:05:36.352 Test: blockdev copy ...passed
00:05:36.352 Suite: bdevio tests on: Malloc0
00:05:36.352 Test: blockdev write read block ...passed
00:05:36.352 Test: blockdev write zeroes read block ...passed
00:05:36.352 Test: blockdev write zeroes read no split ...passed
00:05:36.352 Test: blockdev write zeroes read split ...passed
00:05:36.352 Test: blockdev write zeroes read split partial ...passed
00:05:36.352 Test: blockdev reset ...passed
00:05:36.352 Test: blockdev write read 8 blocks ...passed
00:05:36.352 Test: blockdev write read size > 128k ...passed
00:05:36.352 Test: blockdev write read invalid size ...passed
00:05:36.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:36.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:36.352 Test: blockdev write read max offset ...passed
00:05:36.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:36.352 Test: blockdev writev readv 8 blocks ...passed
00:05:36.352 Test: blockdev writev readv 30 x 1block ...passed
00:05:36.352 Test: blockdev writev readv block ...passed
00:05:36.352 Test: blockdev writev readv size > 128k ...passed
00:05:36.352 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:36.352 Test: blockdev comparev and writev ...passed
00:05:36.352 Test: blockdev nvme passthru rw ...passed
00:05:36.352 Test: blockdev nvme passthru vendor specific ...passed
00:05:36.352 Test: blockdev nvme admin passthru ...passed
00:05:36.352 Test: blockdev copy ...passed
00:05:36.352
00:05:36.352 Run Summary: Type Total Ran Passed Failed Inactive
00:05:36.352 suites 16 16 n/a 0 0
00:05:36.352 tests 368 368 368 0 0
00:05:36.352 asserts 2224 2224 2224 0 n/a
00:05:36.352
00:05:36.352 Elapsed time = 0.539 seconds
00:05:36.352 0
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47757
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47757 ']'
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47757
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']'
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 47757
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio
00:05:36.352 killing process with pid 47757
18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']'
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47757'
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47757
00:05:36.352 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47757
00:05:36.608 18:19:28 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:05:36.608
00:05:36.608 real 0m1.875s
00:05:36.608 user 0m3.543s
00:05:36.608 sys 0m0.849s
00:05:36.608 ************************************
00:05:36.608 END TEST bdev_bounds
00:05:36.608 ************************************
00:05:36.608 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:36.608 18:19:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:36.865 18:19:28 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:05:36.865 18:19:28 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:36.865 18:19:28 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:36.865 18:19:28 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.865 18:19:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:36.865 ************************************
00:05:36.865 START TEST bdev_nbd
00:05:36.865 ************************************
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]]
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0
00:05:36.865
00:05:36.865 real 0m0.005s
00:05:36.865 user 0m0.003s
00:05:36.865 sys 0m0.002s
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:36.865 18:19:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:05:36.865 ************************************
00:05:36.865 END TEST bdev_nbd
00:05:36.865 ************************************
00:05:36.865 18:19:29 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:05:36.865 18:19:29 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]]
00:05:36.865 18:19:29 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']'
00:05:36.865 18:19:29 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']'
00:05:36.865 18:19:29 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite ''
00:05:36.865 18:19:29 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
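Editor's note: the bdev_nbd stage above completes in milliseconds because it never runs on this host. The trace shows nbd_function_test checking `uname -s` against Linux and returning 0 straight away, since NBD (/dev/nbd*) is a Linux-only facility and this run is on FreeBSD. A minimal sketch of that guard, assuming this condensed shape (the real function in bdev/blockdev.sh takes the JSON config and bdev list seen in the trace and does much more on Linux):

nbd_function_test() {                      # hypothetical condensed form of the guard traced above
    local conf=$1 bdev_list=$2             # arguments visible in the run_test line
    if [[ "$(uname -s)" != Linux ]]; then
        # /dev/nbd* devices only exist on Linux, so on FreeBSD the whole
        # test is a no-op, which is why real/user/sys are near zero above
        return 0
    fi
    # on Linux the real function would attach bdevs to nbd devices,
    # drive I/O through them, and detach again
}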
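Editor's note: the bdev_fio stage that follows assembles a throwaway fio job file at /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio. fio_config_gen cats a base template and a verify-workload snippet (their contents are not shown in this log), appends serialize_overlap=1 once a fio-3.x binary is detected, and the @341-@343 loop then adds one [job_<bdev>] section per bdev. Under those assumptions the generated file plausibly looks like the sketch below; only the section names, filename= lines, and serialize_overlap=1 are confirmed by the trace, the [global] keys are illustrative:

[global]
; base-template and verify options elided -- not visible in this log
serialize_overlap=1

[job_Malloc0]
filename=Malloc0

[job_Malloc1p0]
filename=Malloc1p0

; ...one section per bdev in the list, ending with [job_AIO0]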
00:05:36.865 18:19:29 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.865 18:19:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:36.865 ************************************
00:05:36.865 START TEST bdev_fio
00:05:36.865 ************************************
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite ''
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context
00:05:36.865 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo ''
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=//
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context=
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:05:36.865 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:37.798 18:19:29 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:05:37.798 ************************************
00:05:37.798 START TEST bdev_fio_rw_verify
00:05:37.798 ************************************
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:37.798 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:05:37.799 18:19:29 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:37.799 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
fio-3.35
00:05:37.799 Starting 16 threads
00:05:38.365 EAL: TSC is not safe to use in SMP mode
00:05:38.365 EAL: TSC is not invariant
00:05:50.622
00:05:50.622 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101347: Mon Jul 15 18:19:41 2024
00:05:50.622 read: IOPS=231k, BW=902MiB/s (946MB/s)(9022MiB/10003msec)
00:05:50.622 slat (nsec): min=286, max=332866k, avg=4265.12, stdev=476068.52
00:05:50.622 clat (nsec): min=896, max=332911k, avg=45340.36, stdev=1430210.90
00:05:50.622 lat (usec): min=2, max=332914, avg=49.61, stdev=1507.40
00:05:50.622 clat percentiles (usec):
00:05:50.622 | 50.000th=[ 10], 99.000th=[ 709], 99.900th=[ 938],
00:05:50.622 | 99.990th=[ 94897], 99.999th=[143655]
00:05:50.622 write: IOPS=388k, BW=1516MiB/s (1590MB/s)(14.7GiB/9906msec); 0 zone resets
00:05:50.622 slat (nsec): min=552, max=543738k, avg=21600.69, stdev=970428.11
00:05:50.622 clat (nsec): min=839, max=1962.8M, avg=110307.84, stdev=3724005.33
00:05:50.622 lat (usec): min=12, max=1962.9k, avg=131.91, stdev=3848.95
00:05:50.622 clat percentiles (usec):
00:05:50.622 | 50.000th=[ 52], 99.000th=[ 676], 99.900th=[ 2638],
00:05:50.622 | 99.990th=[ 94897], 99.999th=[392168]
00:05:50.622 bw ( MiB/s): min= 553, max= 2426, per=99.83%, avg=1513.54, stdev=38.18, samples=297
00:05:50.622 iops : min=141644, max=621090, avg=387466.79, stdev=9773.06, samples=297
00:05:50.622 lat (nsec) : 1000=0.01%
00:05:50.622 lat (usec) : 2=0.04%, 4=11.16%, 10=15.72%, 20=23.07%, 50=16.59%
00:05:50.622 lat (usec) : 100=29.20%, 250=2.70%, 500=0.16%, 750=0.69%, 1000=0.50%
00:05:50.622 lat (msec) : 2=0.06%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
00:05:50.622 lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01%
00:05:50.622 cpu : usr=55.74%, sys=2.69%, ctx=823677, majf=0, minf=612
00:05:50.622 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:05:50.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:50.622 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:50.622 issued rwts: total=2309607,3844627,0,0 short=0,0,0,0 dropped=0,0,0,0
00:05:50.622 latency : target=0, window=0, percentile=100.00%, depth=8
00:05:50.622
00:05:50.622 Run status group 0 (all jobs):
00:05:50.622 READ: bw=902MiB/s (946MB/s), 902MiB/s-902MiB/s (946MB/s-946MB/s), io=9022MiB (9460MB), run=10003-10003msec
00:05:50.622 WRITE: bw=1516MiB/s (1590MB/s), 1516MiB/s-1516MiB/s (1590MB/s-1590MB/s), io=14.7GiB (15.7GB), run=9906-9906msec
00:05:50.622
00:05:50.622 real 0m12.430s
00:05:50.622 user 1m33.731s
00:05:50.622 sys 0m6.847s
00:05:50.622 18:19:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.622 ************************************
END TEST bdev_fio_rw_verify
************************************
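Editor's note: the fio summary above is internally consistent. The issued-I/O counters report 2309607 reads and 3844627 writes of 4 KiB each, so the read side works out to 2309607 * 4096 B over 10.003 s, about 946 MB/s in decimal units and 902 MiB/s in binary units, exactly the READ line in the run status group. A quick shell cross-check of that arithmetic (the bc invocation is just illustrative):

$ echo 'scale=1; 2309607 * 4096 / 10.003 / 1000000' | bc    # decimal MB/s
945.7
$ echo 'scale=1; 2309607 * 4096 / 10.003 / 1048576' | bc    # binary MiB/s
901.9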
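Editor's note: in the teardown that follows, the suite regenerates bdev.fio for a trim workload, and only bdevs that advertise unmap support may take part. That is what the jq -r 'select(.supported_io_types.unmap == true) | .name' call below does against the JSON bdev dumps; AIO0 and raid1 report "unmap": false there, which is why the bdev list checked further down ends at concat0. A stand-alone illustration of the same filter on a hypothetical two-record stream:

$ printf '%s\n' '{"name":"Malloc0","supported_io_types":{"unmap":true}}' \
                '{"name":"AIO0","supported_io_types":{"unmap":false}}' \
    | jq -r 'select(.supported_io_types.unmap == true) | .name'
Malloc0

jq evaluates each JSON object in the stream independently, so select() simply drops the records whose unmap flag is false.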
00:05:50.622 18:19:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:05:50.622 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:50.623 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c3e73e81-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3e73e81-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b6dee01-3e99-e05c-92a2-eb4fd01fbce8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"0b6dee01-3e99-e05c-92a2-eb4fd01fbce8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6811a84d-08f2-9d5a-b41d-311276d378b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6811a84d-08f2-9d5a-b41d-311276d378b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2bba8f9f-b516-2556-983e-2d367e2ab900"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2bba8f9f-b516-2556-983e-2d367e2ab900",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2ddf6ded-c88a-e955-9ded-66a6a531b73f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ddf6ded-c88a-e955-9ded-66a6a531b73f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "45806506-5974-ca53-b5b5-7b4a8ccbcd74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45806506-5974-ca53-b5b5-7b4a8ccbcd74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a9139bbf-3c51-115e-9e07-c550694bc2e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9139bbf-3c51-115e-9e07-c550694bc2e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "42975f5c-caa0-df5f-9117-a76b20453630"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42975f5c-caa0-df5f-9117-a76b20453630",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "688791e4-4a42-2657-bdca-1fbb61f09dda"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "688791e4-4a42-2657-bdca-1fbb61f09dda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ed480dbb-afde-b053-a7c0-90c01b595a72"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ed480dbb-afde-b053-a7c0-90c01b595a72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3f4b999-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c3ec1ff5-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c3ed5872-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c3ee90ff-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"c3efc96f-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c3f71d26-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c3f101f2-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c3f23a83-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:50.623 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:05:50.623 Malloc1p0 00:05:50.623 Malloc1p1 00:05:50.623 Malloc2p0 00:05:50.623 Malloc2p1 00:05:50.623 Malloc2p2 00:05:50.623 Malloc2p3 00:05:50.623 Malloc2p4 00:05:50.623 Malloc2p5 00:05:50.623 Malloc2p6 00:05:50.623 Malloc2p7 00:05:50.623 TestPT 00:05:50.623 raid0 00:05:50.623 concat0 ]] 00:05:50.623 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:50.625 18:19:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c3e73e81-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3e73e81-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b6dee01-3e99-e05c-92a2-eb4fd01fbce8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0b6dee01-3e99-e05c-92a2-eb4fd01fbce8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6811a84d-08f2-9d5a-b41d-311276d378b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6811a84d-08f2-9d5a-b41d-311276d378b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8eecc5e-9e1f-7f52-b268-2cb1a386f9f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2bba8f9f-b516-2556-983e-2d367e2ab900"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2bba8f9f-b516-2556-983e-2d367e2ab900",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2ddf6ded-c88a-e955-9ded-66a6a531b73f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ddf6ded-c88a-e955-9ded-66a6a531b73f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "45806506-5974-ca53-b5b5-7b4a8ccbcd74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45806506-5974-ca53-b5b5-7b4a8ccbcd74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a9139bbf-3c51-115e-9e07-c550694bc2e8"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9139bbf-3c51-115e-9e07-c550694bc2e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "42975f5c-caa0-df5f-9117-a76b20453630"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42975f5c-caa0-df5f-9117-a76b20453630",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "688791e4-4a42-2657-bdca-1fbb61f09dda"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "688791e4-4a42-2657-bdca-1fbb61f09dda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df75ff6f-6ad2-4f54-a0fa-7b8ed2b293fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ed480dbb-afde-b053-a7c0-90c01b595a72"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ed480dbb-afde-b053-a7c0-90c01b595a72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3f4b999-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f4b999-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c3ec1ff5-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c3ed5872-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f5e4ba-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c3ee90ff-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c3efc96f-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c3f71d26-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3f71d26-42d6-11ef-9ade-d5fc5159efa5",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c3f101f2-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c3f23a83-42d6-11ef-9ade-d5fc5159efa5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c3ffaa5a-42d6-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.625 18:19:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:50.625 ************************************ 00:05:50.625 START TEST bdev_fio_trim 00:05:50.625 ************************************ 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:50.625 18:19:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:50.625 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:50.625 fio-3.35 00:05:50.625 Starting 14 threads 00:05:50.885 EAL: TSC is not safe to use in SMP mode 00:05:50.885 EAL: TSC is not invariant 00:06:03.089 00:06:03.089 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101366: Mon Jul 15 18:19:53 2024 00:06:03.089 write: IOPS=2301k, BW=8989MiB/s (9426MB/s)(87.8GiB/10002msec); 0 zone resets 00:06:03.089 slat (nsec): min=276, max=1212.6M, avg=1485.33, stdev=398635.96 00:06:03.089 clat (nsec): min=1383, max=1220.5M, avg=16667.38, stdev=927578.83 00:06:03.089 lat (usec): min=2, max=1220.5k, avg=18.15, stdev=1009.61 00:06:03.089 clat percentiles (usec): 00:06:03.089 | 50.000th=[ 7], 99.000th=[ 18], 99.900th=[ 955], 99.990th=[ 7963], 00:06:03.089 | 99.999th=[94897] 00:06:03.089 bw ( MiB/s): min= 3544, max=14306, per=100.00%, avg=9251.30, stdev=261.89, samples=259 00:06:03.089 iops : min=907420, max=3662344, avg=2368333.29, stdev=67043.34, samples=259 00:06:03.089 trim: IOPS=2301k, BW=8989MiB/s (9426MB/s)(87.8GiB/10002msec); 0 zone resets 00:06:03.089 slat (nsec): min=600, max=388453k, avg=1659.82, stdev=217040.95 00:06:03.089 clat (nsec): min=418, max=1220.5M, avg=11971.77, stdev=857681.82 00:06:03.089 lat (nsec): min=1694, max=1220.5M, avg=13631.59, stdev=884724.33 00:06:03.089 clat percentiles (usec): 00:06:03.089 | 50.000th=[ 8], 99.000th=[ 18], 99.900th=[ 25], 99.990th=[ 52], 00:06:03.089 | 99.999th=[94897] 00:06:03.089 bw ( MiB/s): min= 3544, max=14306, per=100.00%, avg=9251.31, stdev=261.89, samples=259 00:06:03.089 iops : min=907404, max=3662342, avg=2368335.09, stdev=67043.38, samples=259 00:06:03.089 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:06:03.089 lat (usec) : 2=0.11%, 4=23.60%, 10=56.02%, 20=19.67%, 50=0.34% 00:06:03.089 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.22% 00:06:03.089 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 
50=0.01% 00:06:03.089 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:06:03.089 cpu : usr=63.52%, sys=3.10%, ctx=847453, majf=0, minf=0 00:06:03.089 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:03.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:03.089 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:03.089 issued rwts: total=0,23016917,23016922,0 short=0,0,0,0 dropped=0,0,0,0 00:06:03.089 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:03.089 00:06:03.089 Run status group 0 (all jobs): 00:06:03.089 WRITE: bw=8989MiB/s (9426MB/s), 8989MiB/s-8989MiB/s (9426MB/s-9426MB/s), io=87.8GiB (94.3GB), run=10002-10002msec 00:06:03.089 TRIM: bw=8989MiB/s (9426MB/s), 8989MiB/s-8989MiB/s (9426MB/s-9426MB/s), io=87.8GiB (94.3GB), run=10002-10002msec 00:06:03.089 00:06:03.089 real 0m12.575s 00:06:03.089 user 1m34.928s 00:06:03.089 sys 0m7.329s 00:06:03.089 18:19:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.089 18:19:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:06:03.089 ************************************ 00:06:03.089 END TEST bdev_fio_trim 00:06:03.089 ************************************ 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:06:03.089 /home/vagrant/spdk_repo/spdk 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:06:03.089 00:06:03.089 real 0m26.022s 00:06:03.089 user 3m9.040s 00:06:03.089 sys 0m14.778s 00:06:03.089 ************************************ 00:06:03.089 END TEST bdev_fio 00:06:03.089 ************************************ 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.089 18:19:55 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:06:03.089 18:19:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:03.089 18:19:55 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:03.089 18:19:55 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:03.089 18:19:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:03.089 18:19:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.089 18:19:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:03.089 ************************************ 00:06:03.089 START TEST bdev_verify 00:06:03.089 ************************************ 00:06:03.089 18:19:55 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:03.089 [2024-07-15 18:19:55.113036] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
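A quick consistency check on the bdev_fio_trim summary just above: fio reports 23,016,917 writes and 23,016,922 trims issued in 10.002 s (the workload is trimwrite, so every block is written and then trimmed), which works out to the headline ~2301k IOPS per phase, and at bs=4k to the reported 8989 MiB/s:
 # arithmetic only, values copied from the fio summary above
 $ awk 'BEGIN { printf "%.0f IOPS, %.0f MiB/s\n", 23016917/10.002, 23016917/10.002*4096/(1024*1024) }'
 2301231 IOPS, 8989 MiB/s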
00:06:03.089 [2024-07-15 18:19:55.113285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:03.655 EAL: TSC is not safe to use in SMP mode 00:06:03.655 EAL: TSC is not invariant 00:06:03.655 [2024-07-15 18:19:55.715169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.655 [2024-07-15 18:19:55.821191] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:03.655 [2024-07-15 18:19:55.821246] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:03.655 [2024-07-15 18:19:55.824124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.655 [2024-07-15 18:19:55.824115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.655 [2024-07-15 18:19:55.882555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:03.655 [2024-07-15 18:19:55.882609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:03.655 [2024-07-15 18:19:55.890531] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:03.655 [2024-07-15 18:19:55.890556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:03.655 [2024-07-15 18:19:55.898549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:03.655 [2024-07-15 18:19:55.898573] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:03.655 [2024-07-15 18:19:55.898582] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:03.655 [2024-07-15 18:19:55.946555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:03.655 [2024-07-15 18:19:55.946610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.655 [2024-07-15 18:19:55.946621] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da8d5e36800 00:06:03.655 [2024-07-15 18:19:55.946629] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.655 [2024-07-15 18:19:55.947011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.655 [2024-07-15 18:19:55.947033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:03.928 Running I/O for 5 seconds... 
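The NOTICE lines above show bdevperf rebuilding the test topology from bdev.json: the passthru module first defers ("vbdev creation deferred pending base bdev arrival"), then matches Malloc3 once it appears, claims it, and registers TestPT on top of it. Outside the test harness the same pair could be created by hand over RPC, roughly as below; the 32 MiB size is inferred from the earlier JSON dump (TestPT: 65536 blocks x 512 B), and the rpc.py path is an assumption:
 # sketch: recreate the Malloc3 -> TestPT passthru stack on a running target
 $ scripts/rpc.py bdev_malloc_create -b Malloc3 32 512       # 32 MiB malloc backing bdev, 512 B blocks
 $ scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT  # claim Malloc3, expose it as TestPT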
00:06:09.230 00:06:09.230 Latency(us) 00:06:09.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:09.230 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x1000 00:06:09.230 Malloc0 : 5.04 7482.99 29.23 0.00 0.00 17081.40 65.16 45755.99 00:06:09.230 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x1000 length 0x1000 00:06:09.230 Malloc0 : 5.05 132.09 0.52 0.00 0.00 968084.14 422.63 1121021.85 00:06:09.230 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x800 00:06:09.230 Malloc1p0 : 5.03 5470.80 21.37 0.00 0.00 23384.20 309.06 30265.68 00:06:09.230 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x800 length 0x800 00:06:09.230 Malloc1p0 : 5.02 5917.85 23.12 0.00 0.00 21617.44 309.06 24427.03 00:06:09.230 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x800 00:06:09.230 Malloc1p1 : 5.03 5470.47 21.37 0.00 0.00 23380.78 329.54 28240.03 00:06:09.230 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x800 length 0x800 00:06:09.230 Malloc1p1 : 5.02 5917.46 23.12 0.00 0.00 21614.63 335.13 23950.40 00:06:09.230 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p0 : 5.03 5470.04 21.37 0.00 0.00 23377.73 299.75 26691.00 00:06:09.230 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p0 : 5.02 5917.04 23.11 0.00 0.00 21611.12 312.79 23354.62 00:06:09.230 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p1 : 5.03 5469.68 21.37 0.00 0.00 23374.23 309.06 25141.97 00:06:09.230 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p1 : 5.02 5916.69 23.11 0.00 0.00 21608.69 310.92 22878.00 00:06:09.230 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p2 : 5.03 5469.39 21.36 0.00 0.00 23371.49 303.48 24188.72 00:06:09.230 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p2 : 5.02 5916.29 23.11 0.00 0.00 21605.36 314.65 22401.37 00:06:09.230 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p3 : 5.03 5468.88 21.36 0.00 0.00 23368.80 314.65 24069.56 00:06:09.230 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p3 : 5.02 5915.93 23.11 0.00 0.00 21602.36 327.68 21924.75 00:06:09.230 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p4 : 5.03 5468.59 21.36 0.00 0.00 23365.51 299.75 25380.28 
00:06:09.230 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p4 : 5.02 5915.56 23.11 0.00 0.00 21599.34 310.92 21567.28 00:06:09.230 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p5 : 5.03 5468.29 21.36 0.00 0.00 23362.30 299.75 26333.53 00:06:09.230 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p5 : 5.02 5915.22 23.11 0.00 0.00 21596.29 305.34 21209.81 00:06:09.230 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p6 : 5.03 5468.00 21.36 0.00 0.00 23359.12 299.75 27644.25 00:06:09.230 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p6 : 5.02 5914.84 23.10 0.00 0.00 21593.51 307.20 21209.81 00:06:09.230 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x200 00:06:09.230 Malloc2p7 : 5.03 5467.72 21.36 0.00 0.00 23355.61 322.09 28597.50 00:06:09.230 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x200 length 0x200 00:06:09.230 Malloc2p7 : 5.02 5914.51 23.10 0.00 0.00 21590.28 333.27 20018.25 00:06:09.230 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x1000 00:06:09.230 TestPT : 5.03 5447.09 21.28 0.00 0.00 23419.81 1020.28 28597.50 00:06:09.230 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x1000 length 0x1000 00:06:09.230 TestPT : 5.04 5324.83 20.80 0.00 0.00 23947.17 1280.93 71493.74 00:06:09.230 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x2000 00:06:09.230 raid0 : 5.03 5467.29 21.36 0.00 0.00 23346.99 310.92 28240.03 00:06:09.230 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x2000 length 0x2000 00:06:09.230 raid0 : 5.02 5913.94 23.10 0.00 0.00 21582.39 322.09 19303.31 00:06:09.230 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x2000 00:06:09.230 concat0 : 5.03 5466.98 21.36 0.00 0.00 23343.56 314.65 29074.12 00:06:09.230 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x2000 length 0x2000 00:06:09.230 concat0 : 5.02 5913.62 23.10 0.00 0.00 21579.49 312.79 19779.93 00:06:09.230 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x0 length 0x1000 00:06:09.230 raid1 : 5.03 5466.72 21.35 0.00 0.00 23339.88 381.67 30742.31 00:06:09.230 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.230 Verification LBA range: start 0x1000 length 0x1000 00:06:09.230 raid1 : 5.02 5913.14 23.10 0.00 0.00 21576.33 381.67 20614.03 00:06:09.230 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.231 Verification LBA range: start 0x0 length 0x4e2 00:06:09.231 
AIO0 : 5.11 810.37 3.17 0.00 0.00 156772.42 1064.96 259283.96 00:06:09.231 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.231 Verification LBA range: start 0x4e2 length 0x4e2 00:06:09.231 AIO0 : 5.11 821.41 3.21 0.00 0.00 154726.91 12273.09 425149.44 00:06:09.231 =================================================================================================================== 00:06:09.231 Total : 168013.73 656.30 0.00 0.00 24348.53 65.16 1121021.85 00:06:09.231 00:06:09.231 real 0m6.374s 00:06:09.231 user 0m10.264s 00:06:09.231 sys 0m0.700s 00:06:09.231 18:20:01 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.231 18:20:01 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 ************************************ 00:06:09.231 END TEST bdev_verify 00:06:09.231 ************************************ 00:06:09.231 18:20:01 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:09.231 18:20:01 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:09.231 18:20:01 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:09.231 18:20:01 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.231 18:20:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 ************************************ 00:06:09.231 START TEST bdev_verify_big_io 00:06:09.231 ************************************ 00:06:09.231 18:20:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:09.231 [2024-07-15 18:20:01.537624] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:09.231 [2024-07-15 18:20:01.537886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:09.839 EAL: TSC is not safe to use in SMP mode 00:06:09.839 EAL: TSC is not invariant 00:06:09.839 [2024-07-15 18:20:02.143664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.097 [2024-07-15 18:20:02.253812] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:10.097 [2024-07-15 18:20:02.253901] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
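bdev_verify_big_io repeats the verify run with one change: -o 65536, i.e. 64 KiB IOs instead of the 4 KiB used by bdev_verify. With -q 128 that queue depth no longer fits the small split bdevs, which is what triggers the warnings below: each Malloc2p* split is 8192 blocks x 512 B = 4 MiB, so only 64 non-overlapping 64 KiB regions exist, and bdevperf clamps the verify queue depth to 32 for them (and to 78 for the 2048 B-block AIO0, whose 5000 blocks fit 156 such regions; the observed clamps are half the raw fit counts, the exact rule being bdevperf-internal):
 # capacity arithmetic behind the queue-depth warnings below
 $ awk 'BEGIN { print 8192*512/65536; print int(5000*2048/65536) }'
 64
 156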
00:06:10.097 [2024-07-15 18:20:02.256728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.097 [2024-07-15 18:20:02.256719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.097 [2024-07-15 18:20:02.316000] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:10.097 [2024-07-15 18:20:02.316062] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:10.097 [2024-07-15 18:20:02.323988] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:10.097 [2024-07-15 18:20:02.324018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:10.097 [2024-07-15 18:20:02.332009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.097 [2024-07-15 18:20:02.332048] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:10.097 [2024-07-15 18:20:02.332059] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:10.097 [2024-07-15 18:20:02.380007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.097 [2024-07-15 18:20:02.380058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.097 [2024-07-15 18:20:02.380070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5eb3a836800 00:06:10.097 [2024-07-15 18:20:02.380078] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.097 [2024-07-15 18:20:02.380489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.097 [2024-07-15 18:20:02.380518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:10.356 [2024-07-15 18:20:02.481858] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.482128] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.482357] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.482581] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.482796] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.483019] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.483241] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.483456] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.483669] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.483893] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.484121] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.484354] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.484591] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.484837] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.485062] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.485286] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:10.356 [2024-07-15 18:20:02.487536] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:10.356 [2024-07-15 18:20:02.487798] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:10.356 Running I/O for 5 seconds... 
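In the latency table that follows, the columns after the device name are runtime in seconds, IOPS, MiB/s, Fail/s, TO/s, and then average/min/max latency in microseconds; MiB/s is simply IOPS x IO size. For example, the Malloc0 row on core 0x1 below reports 4003.45 IOPS at 64 KiB, which reproduces its 250.22 MiB/s figure:
 # arithmetic only: IOPS x 64 KiB expressed in MiB/s
 $ awk 'BEGIN { printf "%.2f\n", 4003.45*65536/(1024*1024) }'
 250.22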
00:06:15.622
00:06:15.622 Latency(us)
00:06:15.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:15.622 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x100
00:06:15.622 Malloc0 : 5.05 4003.45 250.22 0.00 0.00 31881.77 86.57 89128.86
00:06:15.622 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x100 length 0x100
00:06:15.622 Malloc0 : 5.06 3667.91 229.24 0.00 0.00 34804.72 88.90 109623.73
00:06:15.622 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x80
00:06:15.622 Malloc1p0 : 5.09 518.17 32.39 0.00 0.00 245547.89 487.80 308852.96
00:06:15.622 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x80 length 0x80
00:06:15.622 Malloc1p0 : 5.08 1738.86 108.68 0.00 0.00 73233.75 1057.51 138221.23
00:06:15.622 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x80
00:06:15.622 Malloc1p1 : 5.10 518.15 32.38 0.00 0.00 245110.83 420.77 301226.96
00:06:15.622 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x80 length 0x80
00:06:15.622 Malloc1p1 : 5.09 481.15 30.07 0.00 0.00 264454.17 392.84 306946.46
00:06:15.622 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p0 : 5.06 502.31 31.39 0.00 0.00 63234.65 258.79 107717.24
00:06:15.622 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p0 : 5.07 463.74 28.98 0.00 0.00 68522.26 253.21 101044.49
00:06:15.622 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p1 : 5.06 502.28 31.39 0.00 0.00 63203.07 256.93 106763.99
00:06:15.622 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p1 : 5.07 463.71 28.98 0.00 0.00 68501.16 273.69 100091.24
00:06:15.622 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p2 : 5.07 502.26 31.39 0.00 0.00 63180.15 273.69 106287.36
00:06:15.622 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p2 : 5.07 463.69 28.98 0.00 0.00 68468.21 262.52 99137.99
00:06:15.622 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p3 : 5.07 502.23 31.39 0.00 0.00 63157.34 262.52 105334.11
00:06:15.622 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p3 : 5.07 463.66 28.98 0.00 0.00 68445.17 253.21 98184.74
00:06:15.622 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p4 : 5.07 502.20 31.39 0.00 0.00 63130.28 253.21 104380.86
00:06:15.622 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p4 : 5.07 463.64 28.98 0.00 0.00 68423.15 253.21 97708.11
00:06:15.622 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p5 : 5.07 502.18 31.39 0.00 0.00 63102.15 258.79 103427.61
00:06:15.622 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x20 length 0x20
00:06:15.622 Malloc2p5 : 5.07 463.62 28.98 0.00 0.00 68397.18 260.65 96754.86
00:06:15.622 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.622 Verification LBA range: start 0x0 length 0x20
00:06:15.622 Malloc2p6 : 5.07 502.15 31.38 0.00 0.00 63075.46 253.21 102474.36
00:06:15.622 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x20 length 0x20
00:06:15.623 Malloc2p6 : 5.07 463.60 28.97 0.00 0.00 68374.69 253.21 95801.61
00:06:15.623 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x20
00:06:15.623 Malloc2p7 : 5.07 504.66 31.54 0.00 0.00 62762.89 251.35 101997.74
00:06:15.623 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x20 length 0x20
00:06:15.623 Malloc2p7 : 5.07 463.57 28.97 0.00 0.00 68333.39 253.21 94848.36
00:06:15.623 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x100
00:06:15.623 TestPT : 5.14 516.51 32.28 0.00 0.00 243444.90 3619.37 263096.96
00:06:15.623 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x100 length 0x100
00:06:15.623 TestPT : 5.22 277.68 17.35 0.00 0.00 452655.70 6523.80 480437.93
00:06:15.623 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x200
00:06:15.623 raid0 : 5.10 521.22 32.58 0.00 0.00 242233.94 396.57 280255.46
00:06:15.623 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x200 length 0x200
00:06:15.623 raid0 : 5.09 481.13 30.07 0.00 0.00 262731.33 383.53 285974.96
00:06:15.623 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x200
00:06:15.623 concat0 : 5.09 524.55 32.78 0.00 0.00 240370.67 374.23 272629.46
00:06:15.623 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x200 length 0x200
00:06:15.623 concat0 : 5.09 484.19 30.26 0.00 0.00 260755.46 379.81 278348.96
00:06:15.623 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x100
00:06:15.623 raid1 : 5.09 524.53 32.78 0.00 0.00 239932.65 700.04 263096.96
00:06:15.623 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x100 length 0x100
00:06:15.623 raid1 : 5.09 484.17 30.26 0.00 0.00 260269.78 444.97 268816.46
00:06:15.623 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x0 length 0x4e
00:06:15.623 AIO0 : 5.09 534.48 33.41 0.00 0.00 143437.98 415.19 160145.98
00:06:15.623 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:06:15.623 Verification LBA range: start 0x4e length 0x4e
00:06:15.623 AIO0 : 5.09 480.93 30.06 0.00 0.00 159468.95 277.41 162052.48
00:06:15.623 ===================================================================================================================
00:06:15.623 Total : 23486.59 1467.91 0.00 0.00 103963.95 86.57 480437.93
00:06:15.882
00:06:15.882 real 0m6.517s
00:06:15.882 user 0m11.454s
00:06:15.882 sys 0m0.683s
00:06:15.882 18:20:08 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:15.882 18:20:08 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:15.882 ************************************
00:06:15.882 END TEST bdev_verify_big_io
00:06:15.882 ************************************
00:06:15.882 18:20:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:06:15.882 18:20:08 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:15.882 18:20:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:15.882 18:20:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:15.882 18:20:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:15.882 ************************************
00:06:15.882 START TEST bdev_write_zeroes
00:06:15.882 ************************************
00:06:15.882 18:20:08 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:15.882 [2024-07-15 18:20:08.100130] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:15.882 [2024-07-15 18:20:08.100346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:16.449 EAL: TSC is not safe to use in SMP mode
00:06:16.449 EAL: TSC is not invariant
00:06:16.449 [2024-07-15 18:20:08.674451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.449 [2024-07-15 18:20:08.779496] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:16.449 [2024-07-15 18:20:08.781674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.707 [2024-07-15 18:20:08.840148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:16.707 [2024-07-15 18:20:08.840227] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:16.707 [2024-07-15 18:20:08.848135] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:16.707 [2024-07-15 18:20:08.848166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:16.707 [2024-07-15 18:20:08.856184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:16.707 [2024-07-15 18:20:08.856210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:16.707 [2024-07-15 18:20:08.856218] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:16.707 [2024-07-15 18:20:08.904156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:16.707 [2024-07-15 18:20:08.904215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.707 [2024-07-15 18:20:08.904226] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f479036800 00:06:16.707 [2024-07-15 18:20:08.904235] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.708 [2024-07-15 18:20:08.904629] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.708 [2024-07-15 18:20:08.904655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:16.708 Running I/O for 1 seconds... 
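The START TEST/END TEST banners and the real/user/sys triplets that bracket each of these runs come from the run_test helper in test/common/autotest_common.sh. A simplified bash sketch of the pattern visible in this log (an assumed shape; the real helper also manages xtrace state and per-test log scoping):

    # Hedged sketch of the run_test wrapper seen throughout this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST ${test_name}"
        echo "************************************"
        time "$@"        # the bash time keyword emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST ${test_name}"
        echo "************************************"
        return $rc
    }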
00:06:18.124
00:06:18.124 Latency(us)
00:06:18.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:18.124 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc0 : 1.01 30421.76 118.83 0.00 0.00 4206.70 195.49 7983.47
00:06:18.124 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc1p0 : 1.01 30418.13 118.82 0.00 0.00 4205.22 217.83 7685.58
00:06:18.124 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc1p1 : 1.01 30414.62 118.81 0.00 0.00 4203.70 219.69 7447.26
00:06:18.124 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p0 : 1.01 30411.39 118.79 0.00 0.00 4202.34 180.60 7417.48
00:06:18.124 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p1 : 1.01 30408.27 118.78 0.00 0.00 4200.90 188.04 7208.95
00:06:18.124 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p2 : 1.01 30404.01 118.77 0.00 0.00 4199.53 182.46 6940.85
00:06:18.124 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p3 : 1.01 30400.84 118.75 0.00 0.00 4198.28 183.39 6762.12
00:06:18.124 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p4 : 1.01 30447.89 118.94 0.00 0.00 4189.92 181.53 6583.38
00:06:18.124 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p5 : 1.01 30445.02 118.93 0.00 0.00 4188.35 180.60 6345.07
00:06:18.124 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p6 : 1.01 30441.46 118.91 0.00 0.00 4187.76 181.53 6106.76
00:06:18.124 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 Malloc2p7 : 1.01 30438.57 118.90 0.00 0.00 4185.72 183.39 5868.44
00:06:18.124 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 TestPT : 1.01 30435.73 118.89 0.00 0.00 4184.85 184.32 5719.50
00:06:18.124 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 raid0 : 1.01 30431.72 118.87 0.00 0.00 4183.06 268.10 5421.61
00:06:18.124 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 concat0 : 1.01 30428.25 118.86 0.00 0.00 4180.68 273.69 5272.66
00:06:18.124 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 raid1 : 1.01 30421.65 118.83 0.00 0.00 4178.89 452.42 4825.83
00:06:18.124 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:18.124 AIO0 : 1.05 3185.48 12.44 0.00 0.00 39062.71 484.07 132501.73
00:06:18.124 ===================================================================================================================
00:06:18.124 Total : 459554.79 1795.14 0.00 0.00 4445.05 180.60 132501.73
00:06:18.124
00:06:18.124 real 0m2.273s
00:06:18.124 user 0m1.496s
00:06:18.124 sys 0m0.647s
00:06:18.124 18:20:10 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.124 18:20:10 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:18.124 ************************************
00:06:18.124 END TEST bdev_write_zeroes
00:06:18.124 ************************************
00:06:18.124 18:20:10 blockdev_general
-- common/autotest_common.sh@1142 -- # return 0 00:06:18.124 18:20:10 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:18.124 18:20:10 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:18.124 18:20:10 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.124 18:20:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:18.124 ************************************ 00:06:18.124 START TEST bdev_json_nonenclosed 00:06:18.124 ************************************ 00:06:18.124 18:20:10 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:18.124 [2024-07-15 18:20:10.421487] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:18.124 [2024-07-15 18:20:10.421727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:18.692 EAL: TSC is not safe to use in SMP mode 00:06:18.692 EAL: TSC is not invariant 00:06:18.692 [2024-07-15 18:20:11.029052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.951 [2024-07-15 18:20:11.134351] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:18.951 [2024-07-15 18:20:11.136553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.951 [2024-07-15 18:20:11.136598] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:18.951 [2024-07-15 18:20:11.136609] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:18.951 [2024-07-15 18:20:11.136621] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.951 00:06:18.951 real 0m0.871s 00:06:18.951 user 0m0.213s 00:06:18.951 sys 0m0.656s 00:06:18.951 18:20:11 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:06:18.951 18:20:11 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.951 18:20:11 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:18.951 ************************************ 00:06:18.951 END TEST bdev_json_nonenclosed 00:06:18.951 ************************************ 00:06:19.210 18:20:11 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:19.210 18:20:11 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:06:19.210 18:20:11 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.210 18:20:11 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:19.210 18:20:11 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.210 18:20:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:19.210 ************************************ 00:06:19.210 START TEST bdev_json_nonarray 00:06:19.210 ************************************ 00:06:19.210 18:20:11 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.210 [2024-07-15 18:20:11.332692] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:19.210 [2024-07-15 18:20:11.332858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:19.778 EAL: TSC is not safe to use in SMP mode 00:06:19.778 EAL: TSC is not invariant 00:06:19.778 [2024-07-15 18:20:11.930606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.778 [2024-07-15 18:20:12.037243] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:19.778 [2024-07-15 18:20:12.039408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.778 [2024-07-15 18:20:12.039452] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:19.778 [2024-07-15 18:20:12.039462] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:19.778 [2024-07-15 18:20:12.039471] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.037 00:06:20.037 real 0m0.857s 00:06:20.037 user 0m0.216s 00:06:20.037 sys 0m0.640s 00:06:20.037 ************************************ 00:06:20.037 END TEST bdev_json_nonarray 00:06:20.037 18:20:12 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:06:20.037 18:20:12 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.037 18:20:12 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:20.037 ************************************ 00:06:20.037 18:20:12 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:20.037 18:20:12 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:06:20.037 18:20:12 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:06:20.037 18:20:12 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:06:20.037 18:20:12 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.037 18:20:12 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.037 18:20:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:20.037 ************************************ 00:06:20.037 START TEST bdev_qos 00:06:20.037 ************************************ 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48158 00:06:20.037 Process qos testing pid: 48158 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48158' 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48158 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48158 ']' 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.037 18:20:12 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:20.037 [2024-07-15 18:20:12.236411] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
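The two json_config suites that just completed, bdev_json_nonenclosed and bdev_json_nonarray, are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only when the app refuses to start (exit status 234, the es=234 captured above). The real fixtures live under test/bdev/; the contents below are assumed reconstructions matching the two error messages, not copies of the actual files:

    # Hedged sketch of the two malformed configs (contents assumed).
    # Top level not enclosed in {} -> "not enclosed in {}."
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # "subsystems" not an array -> "'subsystems' should be an array."
    printf '%s\n' '{ "subsystems": "bdev" }' > nonarray.json
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # Each run is expected to fail fast during config parsing:
    "$bdevperf" --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 || echo "rejected as expected, exit status $?"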
00:06:20.037 [2024-07-15 18:20:12.236643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:20.604 EAL: TSC is not safe to use in SMP mode 00:06:20.605 EAL: TSC is not invariant 00:06:20.605 [2024-07-15 18:20:12.822848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.605 [2024-07-15 18:20:12.949390] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:20.605 [2024-07-15 18:20:12.952129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 Malloc_0 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.171 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 [ 00:06:21.171 { 00:06:21.171 "name": "Malloc_0", 00:06:21.171 "aliases": [ 00:06:21.171 "e0bfbca7-42d6-11ef-9ade-d5fc5159efa5" 00:06:21.171 ], 00:06:21.171 "product_name": "Malloc disk", 00:06:21.171 "block_size": 512, 00:06:21.171 "num_blocks": 262144, 00:06:21.171 "uuid": "e0bfbca7-42d6-11ef-9ade-d5fc5159efa5", 00:06:21.171 "assigned_rate_limits": { 00:06:21.171 "rw_ios_per_sec": 0, 00:06:21.171 "rw_mbytes_per_sec": 0, 00:06:21.171 "r_mbytes_per_sec": 0, 00:06:21.171 "w_mbytes_per_sec": 0 00:06:21.171 }, 00:06:21.171 "claimed": false, 00:06:21.171 "zoned": false, 00:06:21.171 "supported_io_types": { 00:06:21.171 "read": true, 00:06:21.171 "write": true, 00:06:21.171 "unmap": true, 00:06:21.171 "flush": true, 00:06:21.171 "reset": true, 00:06:21.171 "nvme_admin": false, 00:06:21.171 "nvme_io": false, 00:06:21.171 "nvme_io_md": false, 00:06:21.171 "write_zeroes": true, 00:06:21.171 "zcopy": true, 00:06:21.171 
"get_zone_info": false, 00:06:21.171 "zone_management": false, 00:06:21.171 "zone_append": false, 00:06:21.171 "compare": false, 00:06:21.171 "compare_and_write": false, 00:06:21.171 "abort": true, 00:06:21.171 "seek_hole": false, 00:06:21.171 "seek_data": false, 00:06:21.171 "copy": true, 00:06:21.171 "nvme_iov_md": false 00:06:21.171 }, 00:06:21.171 "memory_domains": [ 00:06:21.171 { 00:06:21.171 "dma_device_id": "system", 00:06:21.171 "dma_device_type": 1 00:06:21.171 }, 00:06:21.171 { 00:06:21.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.171 "dma_device_type": 2 00:06:21.172 } 00:06:21.172 ], 00:06:21.172 "driver_specific": {} 00:06:21.172 } 00:06:21.172 ] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.172 Null_1 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:21.172 [ 00:06:21.172 { 00:06:21.172 "name": "Null_1", 00:06:21.172 "aliases": [ 00:06:21.172 "e0c49dba-42d6-11ef-9ade-d5fc5159efa5" 00:06:21.172 ], 00:06:21.172 "product_name": "Null disk", 00:06:21.172 "block_size": 512, 00:06:21.172 "num_blocks": 262144, 00:06:21.172 "uuid": "e0c49dba-42d6-11ef-9ade-d5fc5159efa5", 00:06:21.172 "assigned_rate_limits": { 00:06:21.172 "rw_ios_per_sec": 0, 00:06:21.172 "rw_mbytes_per_sec": 0, 00:06:21.172 "r_mbytes_per_sec": 0, 00:06:21.172 "w_mbytes_per_sec": 0 00:06:21.172 }, 00:06:21.172 "claimed": false, 00:06:21.172 "zoned": false, 00:06:21.172 "supported_io_types": { 00:06:21.172 "read": true, 00:06:21.172 "write": true, 00:06:21.172 "unmap": false, 00:06:21.172 "flush": false, 00:06:21.172 "reset": true, 00:06:21.172 "nvme_admin": false, 00:06:21.172 "nvme_io": false, 00:06:21.172 "nvme_io_md": false, 00:06:21.172 "write_zeroes": true, 00:06:21.172 "zcopy": 
false, 00:06:21.172 "get_zone_info": false, 00:06:21.172 "zone_management": false, 00:06:21.172 "zone_append": false, 00:06:21.172 "compare": false, 00:06:21.172 "compare_and_write": false, 00:06:21.172 "abort": true, 00:06:21.172 "seek_hole": false, 00:06:21.172 "seek_data": false, 00:06:21.172 "copy": false, 00:06:21.172 "nvme_iov_md": false 00:06:21.172 }, 00:06:21.172 "driver_specific": {} 00:06:21.172 } 00:06:21.172 ] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:21.172 18:20:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:21.172 Running I/O for 60 seconds... 
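The 60-second run below measures the unthrottled read rate on Malloc_0, and the IOPS cap applied afterwards (148000 in this log, derived from 593955.78 measured IOPS) is consistent with taking a quarter of the measured rate and rounding down to a multiple of 1000. A bash sketch of that derivation; the formula is an inference from the logged numbers, not a quote of bdev/blockdev.sh:

    # Hedged sketch: assumed derivation of the QoS IOPS limit.
    io_result=593955                              # measured unthrottled IOPS
    iops_limit=$((io_result / 4 / 1000 * 1000))
    echo "$iops_limit"                            # -> 148000
    # The pass band checked after the throttled run is +/-10% of the limit:
    echo $((iops_limit * 9 / 10))                 # -> 133200 (lower bound)
    echo $((iops_limit * 11 / 10))                # -> 162800 (upper bound)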
00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 593955.78 2375823.11 0.00 0.00 2548736.00 0.00 0.00 ' 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=593955.78 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 593955 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=593955 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=148000 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 148000 -gt 1000 ']' 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 148000 Malloc_0 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 148000 IOPS Malloc_0 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.735 18:20:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:27.735 ************************************ 00:06:27.735 START TEST bdev_qos_iops 00:06:27.735 ************************************ 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 148000 IOPS Malloc_0 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=148000 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:27.735 18:20:18 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 147938.24 591752.96 0.00 0.00 639952.00 0.00 0.00 ' 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=147938.24 00:06:33.096 18:20:24 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 147938 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=147938 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=133200 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=162800 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 147938 -lt 133200 ']' 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 147938 -gt 162800 ']' 00:06:33.096 00:06:33.096 real 0m5.550s 00:06:33.096 user 0m0.129s 00:06:33.096 sys 0m0.049s 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.096 ************************************ 00:06:33.096 END TEST bdev_qos_iops 00:06:33.096 ************************************ 00:06:33.096 18:20:24 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:33.096 18:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 222694.40 890777.59 0.00 0.00 941056.00 0.00 0.00 ' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=941056.00 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 941056 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=941056 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=91 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 91 -lt 2 ']' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 91 Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 91 BANDWIDTH Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.361 18:20:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 ************************************ 00:06:38.361 START TEST bdev_qos_bw 00:06:38.361 ************************************ 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 91 BANDWIDTH Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=91 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:38.361 18:20:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 23307.61 93230.43 0.00 0.00 95608.00 0.00 0.00 ' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=95608.00 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 95608 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=95608 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=93184 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=83865 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=102502 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 95608 -lt 83865 ']' 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 95608 -gt 102502 ']' 00:06:43.691 00:06:43.691 real 0m5.413s 00:06:43.691 user 0m0.151s 00:06:43.691 sys 0m0.010s 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:06:43.691 ************************************ 00:06:43.691 END TEST bdev_qos_bw 00:06:43.691 ************************************ 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 
0 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.691 18:20:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:43.691 ************************************ 00:06:43.691 START TEST bdev_qos_ro_bw 00:06:43.691 ************************************ 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:43.691 18:20:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.95 2047.78 0.00 0.00 2212.00 0.00 0.00 ' 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2212.00 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2212 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2212 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- 
# '[' 2212 -lt 1843 ']'
00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2212 -gt 2252 ']'
00:06:48.984
00:06:48.984 real 0m5.539s
00:06:48.984 user 0m0.128s
00:06:48.984 sys 0m0.032s
00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:48.984 18:20:40 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:06:48.984 ************************************
00:06:48.984 END TEST bdev_qos_ro_bw
00:06:48.984 ************************************
00:06:48.984 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:06:48.984 18:20:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:06:48.984 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:48.984 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:49.243
00:06:49.243 Latency(us)
00:06:49.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:49.243 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:49.243 Malloc_0 : 28.07 202668.23 791.67 0.00 0.00 1252.16 370.50 503315.93
00:06:49.243 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:49.243 Null_1 : 28.11 226076.53 883.11 0.00 0.00 1131.75 74.01 33602.06
00:06:49.243 ===================================================================================================================
00:06:49.243 Total : 428744.76 1674.78 0.00 0.00 1188.63 74.01 503315.93
00:06:49.243 0
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48158
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48158 ']'
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48158
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']'
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48158
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf
00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']'
killing process with pid 48158
Received shutdown signal, test time was about 28.120668 seconds
00:06:49.243
00:06:49.243 Latency(us)
00:06:49.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:49.243 ===================================================================================================================
00:06:49.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:49.243 18:20:41
blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48158' 00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48158 00:06:49.243 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48158 00:06:49.502 18:20:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:06:49.502 00:06:49.502 real 0m29.552s 00:06:49.502 user 0m30.176s 00:06:49.502 sys 0m0.931s 00:06:49.502 ************************************ 00:06:49.502 END TEST bdev_qos 00:06:49.502 ************************************ 00:06:49.502 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.502 18:20:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:49.502 18:20:41 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:49.502 18:20:41 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:06:49.502 18:20:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.502 18:20:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.502 18:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:49.502 ************************************ 00:06:49.502 START TEST bdev_qd_sampling 00:06:49.502 ************************************ 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48383 00:06:49.502 Process bdev QD sampling period testing pid: 48383 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48383' 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48383 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48383 ']' 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.502 18:20:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:49.502 [2024-07-15 18:20:41.833119] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:49.502 [2024-07-15 18:20:41.833287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:50.070 EAL: TSC is not safe to use in SMP mode 00:06:50.070 EAL: TSC is not invariant 00:06:50.070 [2024-07-15 18:20:42.411184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.336 [2024-07-15 18:20:42.529733] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:50.336 [2024-07-15 18:20:42.529793] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:50.336 [2024-07-15 18:20:42.532942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.336 [2024-07-15 18:20:42.532927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 Malloc_QD 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:06:50.902 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:50.903 [ 00:06:50.903 { 00:06:50.903 "name": "Malloc_QD", 00:06:50.903 "aliases": [ 00:06:50.903 "f26fff69-42d6-11ef-9ade-d5fc5159efa5" 00:06:50.903 ], 00:06:50.903 "product_name": "Malloc disk", 00:06:50.903 "block_size": 512, 00:06:50.903 "num_blocks": 262144, 00:06:50.903 "uuid": "f26fff69-42d6-11ef-9ade-d5fc5159efa5", 00:06:50.903 "assigned_rate_limits": { 00:06:50.903 "rw_ios_per_sec": 0, 00:06:50.903 "rw_mbytes_per_sec": 0, 00:06:50.903 "r_mbytes_per_sec": 0, 00:06:50.903 "w_mbytes_per_sec": 0 00:06:50.903 }, 00:06:50.903 "claimed": false, 
00:06:50.903 "zoned": false, 00:06:50.903 "supported_io_types": { 00:06:50.903 "read": true, 00:06:50.903 "write": true, 00:06:50.903 "unmap": true, 00:06:50.903 "flush": true, 00:06:50.903 "reset": true, 00:06:50.903 "nvme_admin": false, 00:06:50.903 "nvme_io": false, 00:06:50.903 "nvme_io_md": false, 00:06:50.903 "write_zeroes": true, 00:06:50.903 "zcopy": true, 00:06:50.903 "get_zone_info": false, 00:06:50.903 "zone_management": false, 00:06:50.903 "zone_append": false, 00:06:50.903 "compare": false, 00:06:50.903 "compare_and_write": false, 00:06:50.903 "abort": true, 00:06:50.903 "seek_hole": false, 00:06:50.903 "seek_data": false, 00:06:50.903 "copy": true, 00:06:50.903 "nvme_iov_md": false 00:06:50.903 }, 00:06:50.903 "memory_domains": [ 00:06:50.903 { 00:06:50.903 "dma_device_id": "system", 00:06:50.903 "dma_device_type": 1 00:06:50.903 }, 00:06:50.903 { 00:06:50.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.903 "dma_device_type": 2 00:06:50.903 } 00:06:50.903 ], 00:06:50.903 "driver_specific": {} 00:06:50.903 } 00:06:50.903 ] 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:06:50.903 18:20:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:50.903 Running I/O for 5 seconds... 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:06:52.806 "tick_rate": 2200002400, 00:06:52.806 "ticks": 750528137393, 00:06:52.806 "bdevs": [ 00:06:52.806 { 00:06:52.806 "name": "Malloc_QD", 00:06:52.806 "bytes_read": 12399448576, 00:06:52.806 "num_read_ops": 3027203, 00:06:52.806 "bytes_written": 0, 00:06:52.806 "num_write_ops": 0, 00:06:52.806 "bytes_unmapped": 0, 00:06:52.806 "num_unmap_ops": 0, 00:06:52.806 "bytes_copied": 0, 00:06:52.806 "num_copy_ops": 0, 00:06:52.806 "read_latency_ticks": 2217418880900, 00:06:52.806 "max_read_latency_ticks": 1009040, 00:06:52.806 "min_read_latency_ticks": 
48476, 00:06:52.806 "write_latency_ticks": 0, 00:06:52.806 "max_write_latency_ticks": 0, 00:06:52.806 "min_write_latency_ticks": 0, 00:06:52.806 "unmap_latency_ticks": 0, 00:06:52.806 "max_unmap_latency_ticks": 0, 00:06:52.806 "min_unmap_latency_ticks": 0, 00:06:52.806 "copy_latency_ticks": 0, 00:06:52.806 "max_copy_latency_ticks": 0, 00:06:52.806 "min_copy_latency_ticks": 0, 00:06:52.806 "io_error": {}, 00:06:52.806 "queue_depth_polling_period": 10, 00:06:52.806 "queue_depth": 512, 00:06:52.806 "io_time": 360, 00:06:52.806 "weighted_io_time": 184320 00:06:52.806 } 00:06:52.806 ] 00:06:52.806 }' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:52.806 00:06:52.806 Latency(us) 00:06:52.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.806 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:52.806 Malloc_QD : 2.00 760008.68 2968.78 0.00 0.00 336.57 59.11 459.87 00:06:52.806 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:52.806 Malloc_QD : 2.00 776432.59 3032.94 0.00 0.00 329.44 55.16 452.42 00:06:52.806 =================================================================================================================== 00:06:52.806 Total : 1536441.27 6001.72 0.00 0.00 332.97 55.16 459.87 00:06:52.806 0 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48383 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48383 ']' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48383 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48383 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:52.806 killing process with pid 48383 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48383' 00:06:52.806 Received shutdown signal, test time was about 2.031205 seconds 00:06:52.806 00:06:52.806 Latency(us) 00:06:52.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.806 
=================================================================================================================== 00:06:52.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48383 00:06:52.806 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48383 00:06:53.065 18:20:45 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:06:53.065 00:06:53.065 real 0m3.514s 00:06:53.065 user 0m6.175s 00:06:53.065 sys 0m0.725s 00:06:53.065 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.065 18:20:45 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:53.065 ************************************ 00:06:53.065 END TEST bdev_qd_sampling 00:06:53.065 ************************************ 00:06:53.065 18:20:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:53.065 18:20:45 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:06:53.065 18:20:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.065 18:20:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.065 18:20:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:53.065 ************************************ 00:06:53.065 START TEST bdev_error 00:06:53.065 ************************************ 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48426 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48426' 00:06:53.065 Process error testing pid: 48426 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48426 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48426 ']' 00:06:53.065 18:20:45 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.065 18:20:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:53.065 [2024-07-15 18:20:45.393370] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
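For reference, the bdev_qd_sampling suite that just finished reduces to a short RPC sequence. A condensed sketch, not taken verbatim from this run: it assumes an SPDK target already serving RPCs on the default socket, and $rpc is shorthand introduced here for scripts/rpc.py:

  $rpc bdev_malloc_create -b Malloc_QD 128 512      # 128 MiB of 512 B blocks -> 262144 blocks, as in the JSON above
  $rpc bdev_set_qd_sampling_period Malloc_QD 10     # echoed back as "queue_depth_polling_period": 10
  # ... run I/O against Malloc_QD (bdevperf does this via perform_tests) ...
  $rpc bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0] | .weighted_io_time / .io_time'
  $rpc bdev_malloc_delete Malloc_QD

With the counters captured above, weighted_io_time / io_time = 184320 / 360 = 512, the time-weighted average queue depth while the bdev was busy; that matches both the sampled "queue_depth": 512 and the two bdevperf channels running at depth 256 each.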
00:06:53.065 [2024-07-15 18:20:45.393549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:53.632 EAL: TSC is not safe to use in SMP mode 00:06:53.632 EAL: TSC is not invariant 00:06:53.632 [2024-07-15 18:20:45.987758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.891 [2024-07-15 18:20:46.094734] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:53.891 [2024-07-15 18:20:46.097003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:06:54.150 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.150 Dev_1 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.150 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.150 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.150 [ 00:06:54.150 { 00:06:54.150 "name": "Dev_1", 00:06:54.150 "aliases": [ 00:06:54.150 "f482c322-42d6-11ef-9ade-d5fc5159efa5" 00:06:54.150 ], 00:06:54.150 "product_name": "Malloc disk", 00:06:54.150 "block_size": 512, 00:06:54.150 "num_blocks": 262144, 00:06:54.150 "uuid": "f482c322-42d6-11ef-9ade-d5fc5159efa5", 00:06:54.150 "assigned_rate_limits": { 00:06:54.150 "rw_ios_per_sec": 0, 00:06:54.150 "rw_mbytes_per_sec": 0, 00:06:54.150 "r_mbytes_per_sec": 0, 00:06:54.150 "w_mbytes_per_sec": 0 00:06:54.150 }, 00:06:54.150 "claimed": false, 00:06:54.150 "zoned": false, 00:06:54.150 "supported_io_types": { 00:06:54.150 "read": true, 00:06:54.150 "write": true, 00:06:54.150 "unmap": true, 00:06:54.150 "flush": true, 00:06:54.150 "reset": true, 00:06:54.150 "nvme_admin": false, 00:06:54.150 "nvme_io": false, 00:06:54.150 "nvme_io_md": false, 00:06:54.150 "write_zeroes": true, 00:06:54.150 "zcopy": true, 
00:06:54.150 "get_zone_info": false, 00:06:54.150 "zone_management": false, 00:06:54.150 "zone_append": false, 00:06:54.150 "compare": false, 00:06:54.150 "compare_and_write": false, 00:06:54.150 "abort": true, 00:06:54.150 "seek_hole": false, 00:06:54.151 "seek_data": false, 00:06:54.151 "copy": true, 00:06:54.151 "nvme_iov_md": false 00:06:54.151 }, 00:06:54.151 "memory_domains": [ 00:06:54.151 { 00:06:54.151 "dma_device_id": "system", 00:06:54.151 "dma_device_type": 1 00:06:54.151 }, 00:06:54.151 { 00:06:54.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.151 "dma_device_type": 2 00:06:54.151 } 00:06:54.151 ], 00:06:54.151 "driver_specific": {} 00:06:54.151 } 00:06:54.151 ] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:54.151 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.151 true 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.151 Dev_2 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.151 [ 00:06:54.151 { 00:06:54.151 "name": "Dev_2", 00:06:54.151 "aliases": [ 00:06:54.151 "f488dcc3-42d6-11ef-9ade-d5fc5159efa5" 00:06:54.151 ], 00:06:54.151 "product_name": "Malloc disk", 00:06:54.151 "block_size": 512, 00:06:54.151 "num_blocks": 262144, 00:06:54.151 "uuid": "f488dcc3-42d6-11ef-9ade-d5fc5159efa5", 00:06:54.151 "assigned_rate_limits": { 00:06:54.151 "rw_ios_per_sec": 0, 00:06:54.151 "rw_mbytes_per_sec": 0, 
00:06:54.151 "r_mbytes_per_sec": 0, 00:06:54.151 "w_mbytes_per_sec": 0 00:06:54.151 }, 00:06:54.151 "claimed": false, 00:06:54.151 "zoned": false, 00:06:54.151 "supported_io_types": { 00:06:54.151 "read": true, 00:06:54.151 "write": true, 00:06:54.151 "unmap": true, 00:06:54.151 "flush": true, 00:06:54.151 "reset": true, 00:06:54.151 "nvme_admin": false, 00:06:54.151 "nvme_io": false, 00:06:54.151 "nvme_io_md": false, 00:06:54.151 "write_zeroes": true, 00:06:54.151 "zcopy": true, 00:06:54.151 "get_zone_info": false, 00:06:54.151 "zone_management": false, 00:06:54.151 "zone_append": false, 00:06:54.151 "compare": false, 00:06:54.151 "compare_and_write": false, 00:06:54.151 "abort": true, 00:06:54.151 "seek_hole": false, 00:06:54.151 "seek_data": false, 00:06:54.151 "copy": true, 00:06:54.151 "nvme_iov_md": false 00:06:54.151 }, 00:06:54.151 "memory_domains": [ 00:06:54.151 { 00:06:54.151 "dma_device_id": "system", 00:06:54.151 "dma_device_type": 1 00:06:54.151 }, 00:06:54.151 { 00:06:54.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.151 "dma_device_type": 2 00:06:54.151 } 00:06:54.151 ], 00:06:54.151 "driver_specific": {} 00:06:54.151 } 00:06:54.151 ] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:54.151 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.151 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:54.410 18:20:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.410 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:06:54.410 18:20:46 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:54.410 Running I/O for 5 seconds... 00:06:55.347 18:20:47 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48426 00:06:55.347 Process is existed as continue on error is set. Pid: 48426 00:06:55.347 18:20:47 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48426' 00:06:55.347 18:20:47 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.347 18:20:47 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:55.347 18:20:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.347 18:20:47 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:06:55.347 Timeout while waiting for response: 00:06:55.347 00:06:55.347 00:06:59.536 00:06:59.536 Latency(us) 00:06:59.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.536 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:59.536 EE_Dev_1 : 0.92 316448.11 1236.13 5.46 0.00 50.33 24.67 137.77 00:06:59.536 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:59.536 Dev_2 : 5.00 699112.68 2730.91 0.00 0.00 22.67 5.73 23473.78 00:06:59.536 =================================================================================================================== 00:06:59.536 Total : 1015560.79 3967.03 5.46 0.00 24.79 5.73 23473.78 00:07:00.910 18:20:52 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48426 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48426 ']' 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48426 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48426 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48426' 00:07:00.910 killing process with pid 48426 00:07:00.910 Received shutdown signal, test time was about 5.000000 seconds 00:07:00.910 00:07:00.910 Latency(us) 00:07:00.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.910 =================================================================================================================== 00:07:00.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48426 00:07:00.910 18:20:52 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48426 00:07:00.910 18:20:53 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48466 00:07:00.910 Process error testing pid: 48466 00:07:00.910 18:20:53 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 
'Process error testing pid: 48466' 00:07:00.910 18:20:53 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48466 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48466 ']' 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.910 18:20:53 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:07:00.910 18:20:53 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.910 [2024-07-15 18:20:53.148072] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:00.910 [2024-07-15 18:20:53.148342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:01.474 EAL: TSC is not safe to use in SMP mode 00:07:01.474 EAL: TSC is not invariant 00:07:01.474 [2024-07-15 18:20:53.769756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.791 [2024-07-15 18:20:53.893099] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:01.791 [2024-07-15 18:20:53.895780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 Dev_1 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 [ 00:07:02.049 { 00:07:02.049 "name": "Dev_1", 00:07:02.049 "aliases": [ 00:07:02.049 "f92d748b-42d6-11ef-9ade-d5fc5159efa5" 00:07:02.049 ], 00:07:02.049 "product_name": "Malloc disk", 00:07:02.049 "block_size": 512, 00:07:02.049 "num_blocks": 262144, 00:07:02.049 "uuid": "f92d748b-42d6-11ef-9ade-d5fc5159efa5", 00:07:02.049 "assigned_rate_limits": { 00:07:02.049 "rw_ios_per_sec": 0, 00:07:02.049 "rw_mbytes_per_sec": 0, 00:07:02.049 "r_mbytes_per_sec": 0, 00:07:02.049 "w_mbytes_per_sec": 0 00:07:02.049 }, 00:07:02.049 "claimed": false, 00:07:02.049 "zoned": false, 00:07:02.049 "supported_io_types": { 00:07:02.049 "read": true, 00:07:02.049 "write": true, 00:07:02.049 "unmap": true, 00:07:02.049 "flush": true, 00:07:02.049 "reset": true, 00:07:02.049 "nvme_admin": false, 00:07:02.049 "nvme_io": false, 00:07:02.049 "nvme_io_md": false, 00:07:02.049 "write_zeroes": true, 00:07:02.049 "zcopy": true, 00:07:02.049 "get_zone_info": false, 00:07:02.049 "zone_management": false, 00:07:02.049 "zone_append": false, 00:07:02.049 "compare": false, 00:07:02.049 "compare_and_write": false, 00:07:02.049 "abort": true, 00:07:02.049 "seek_hole": false, 00:07:02.049 "seek_data": false, 00:07:02.049 "copy": true, 00:07:02.049 "nvme_iov_md": false 00:07:02.049 }, 00:07:02.049 "memory_domains": [ 00:07:02.049 { 00:07:02.049 "dma_device_id": "system", 00:07:02.049 "dma_device_type": 1 00:07:02.049 }, 00:07:02.049 { 00:07:02.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.049 "dma_device_type": 2 00:07:02.049 } 00:07:02.049 ], 00:07:02.049 "driver_specific": {} 00:07:02.049 } 00:07:02.049 ] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 true 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 Dev_2 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:02.049 18:20:54 
blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 [ 00:07:02.049 { 00:07:02.049 "name": "Dev_2", 00:07:02.049 "aliases": [ 00:07:02.049 "f9338e24-42d6-11ef-9ade-d5fc5159efa5" 00:07:02.049 ], 00:07:02.049 "product_name": "Malloc disk", 00:07:02.049 "block_size": 512, 00:07:02.049 "num_blocks": 262144, 00:07:02.049 "uuid": "f9338e24-42d6-11ef-9ade-d5fc5159efa5", 00:07:02.049 "assigned_rate_limits": { 00:07:02.049 "rw_ios_per_sec": 0, 00:07:02.049 "rw_mbytes_per_sec": 0, 00:07:02.049 "r_mbytes_per_sec": 0, 00:07:02.049 "w_mbytes_per_sec": 0 00:07:02.049 }, 00:07:02.049 "claimed": false, 00:07:02.049 "zoned": false, 00:07:02.049 "supported_io_types": { 00:07:02.049 "read": true, 00:07:02.049 "write": true, 00:07:02.049 "unmap": true, 00:07:02.049 "flush": true, 00:07:02.049 "reset": true, 00:07:02.049 "nvme_admin": false, 00:07:02.049 "nvme_io": false, 00:07:02.049 "nvme_io_md": false, 00:07:02.049 "write_zeroes": true, 00:07:02.049 "zcopy": true, 00:07:02.049 "get_zone_info": false, 00:07:02.049 "zone_management": false, 00:07:02.049 "zone_append": false, 00:07:02.049 "compare": false, 00:07:02.049 "compare_and_write": false, 00:07:02.049 "abort": true, 00:07:02.049 "seek_hole": false, 00:07:02.049 "seek_data": false, 00:07:02.049 "copy": true, 00:07:02.049 "nvme_iov_md": false 00:07:02.049 }, 00:07:02.049 "memory_domains": [ 00:07:02.049 { 00:07:02.049 "dma_device_id": "system", 00:07:02.049 "dma_device_type": 1 00:07:02.049 }, 00:07:02.049 { 00:07:02.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.049 "dma_device_type": 2 00:07:02.049 } 00:07:02.049 ], 00:07:02.049 "driver_specific": {} 00:07:02.049 } 00:07:02.049 ] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48466 00:07:02.049 18:20:54 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:07:02.049 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48466 00:07:02.049 
18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:07:02.050 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.050 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:07:02.050 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.050 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48466 00:07:02.307 Running I/O for 5 seconds... 00:07:02.307 task offset: 224672 on job bdev=EE_Dev_1 fails 00:07:02.307 00:07:02.307 Latency(us) 00:07:02.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.307 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:02.307 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:07:02.307 EE_Dev_1 : 0.00 162962.96 636.57 37037.04 0.00 65.29 24.20 122.88 00:07:02.307 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:02.307 Dev_2 : 0.00 196319.02 766.87 0.00 0.00 37.77 24.44 55.16 00:07:02.307 =================================================================================================================== 00:07:02.307 Total : 359281.98 1403.45 37037.04 0.00 50.36 24.20 122.88 00:07:02.307 [2024-07-15 18:20:54.477926] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.307 request: 00:07:02.307 { 00:07:02.307 "method": "perform_tests", 00:07:02.307 "req_id": 1 00:07:02.307 } 00:07:02.307 Got JSON-RPC error response 00:07:02.307 response: 00:07:02.307 { 00:07:02.307 "code": -32603, 00:07:02.307 "message": "bdevperf failed with error Operation not permitted" 00:07:02.307 } 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.565 00:07:02.565 real 0m9.360s 00:07:02.565 user 0m9.366s 00:07:02.565 sys 0m1.413s 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.565 18:20:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 END TEST bdev_error 00:07:02.565 ************************************ 00:07:02.565 18:20:54 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:02.565 18:20:54 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:07:02.565 18:20:54 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.565 18:20:54 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.565 18:20:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 START TEST bdev_stat 00:07:02.565 ************************************ 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 
-- # STAT_DEV=Malloc_STAT 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48497 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:07:02.565 Process Bdev IO statistics testing pid: 48497 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48497' 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48497 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48497 ']' 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.565 18:20:54 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 [2024-07-15 18:20:54.800095] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:02.565 [2024-07-15 18:20:54.800286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:03.132 EAL: TSC is not safe to use in SMP mode 00:07:03.132 EAL: TSC is not invariant 00:07:03.132 [2024-07-15 18:20:55.432151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.390 [2024-07-15 18:20:55.542758] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:03.390 [2024-07-15 18:20:55.542824] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
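As background for the two bdev_error runs above: both build the same error-injection stack, shown here as a sketch under the same $rpc shorthand and running-target assumption as before:

  $rpc bdev_malloc_create -b Dev_1 128 512
  $rpc bdev_error_create Dev_1                            # wraps Dev_1 in an error bdev named EE_Dev_1
  $rpc bdev_malloc_create -b Dev_2 128 512
  $rpc bdev_error_inject_error EE_Dev_1 all failure -n 5  # fail the next 5 I/Os of any type

The difference between the runs is the bdevperf invocation: pid 48426 was started with -f, which the script treats as its continue-on-error path, so the five injected failures surface as EE_Dev_1's Fail/s column (about 5.46/s over its 0.92 s window, i.e. roughly five failures) and the run completes; pid 48466 omits -f, the failing job aborts the app ("task offset: 224672 on job bdev=EE_Dev_1 fails"), and perform_tests comes back with JSON-RPC error -32603.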
00:07:03.390 [2024-07-15 18:20:55.545691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.390 [2024-07-15 18:20:55.545680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:03.649 Malloc_STAT 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:03.649 [ 00:07:03.649 { 00:07:03.649 "name": "Malloc_STAT", 00:07:03.649 "aliases": [ 00:07:03.649 "fa1d3e31-42d6-11ef-9ade-d5fc5159efa5" 00:07:03.649 ], 00:07:03.649 "product_name": "Malloc disk", 00:07:03.649 "block_size": 512, 00:07:03.649 "num_blocks": 262144, 00:07:03.649 "uuid": "fa1d3e31-42d6-11ef-9ade-d5fc5159efa5", 00:07:03.649 "assigned_rate_limits": { 00:07:03.649 "rw_ios_per_sec": 0, 00:07:03.649 "rw_mbytes_per_sec": 0, 00:07:03.649 "r_mbytes_per_sec": 0, 00:07:03.649 "w_mbytes_per_sec": 0 00:07:03.649 }, 00:07:03.649 "claimed": false, 00:07:03.649 "zoned": false, 00:07:03.649 "supported_io_types": { 00:07:03.649 "read": true, 00:07:03.649 "write": true, 00:07:03.649 "unmap": true, 00:07:03.649 "flush": true, 00:07:03.649 "reset": true, 00:07:03.649 "nvme_admin": false, 00:07:03.649 "nvme_io": false, 00:07:03.649 "nvme_io_md": false, 00:07:03.649 "write_zeroes": true, 00:07:03.649 "zcopy": true, 00:07:03.649 "get_zone_info": false, 00:07:03.649 "zone_management": false, 00:07:03.649 "zone_append": false, 00:07:03.649 "compare": false, 00:07:03.649 "compare_and_write": false, 00:07:03.649 "abort": true, 00:07:03.649 "seek_hole": false, 00:07:03.649 "seek_data": false, 00:07:03.649 "copy": true, 00:07:03.649 "nvme_iov_md": false 00:07:03.649 }, 00:07:03.649 "memory_domains": [ 00:07:03.649 { 
00:07:03.649 "dma_device_id": "system", 00:07:03.649 "dma_device_type": 1 00:07:03.649 }, 00:07:03.649 { 00:07:03.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.649 "dma_device_type": 2 00:07:03.649 } 00:07:03.649 ], 00:07:03.649 "driver_specific": {} 00:07:03.649 } 00:07:03.649 ] 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:07:03.649 18:20:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:03.649 Running I/O for 10 seconds... 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.237 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:07:06.237 "tick_rate": 2200002400, 00:07:06.237 "ticks": 779003842158, 00:07:06.237 "bdevs": [ 00:07:06.237 { 00:07:06.237 "name": "Malloc_STAT", 00:07:06.237 "bytes_read": 12322902528, 00:07:06.237 "num_read_ops": 3008515, 00:07:06.238 "bytes_written": 0, 00:07:06.238 "num_write_ops": 0, 00:07:06.238 "bytes_unmapped": 0, 00:07:06.238 "num_unmap_ops": 0, 00:07:06.238 "bytes_copied": 0, 00:07:06.238 "num_copy_ops": 0, 00:07:06.238 "read_latency_ticks": 2262569299049, 00:07:06.238 "max_read_latency_ticks": 1355024, 00:07:06.238 "min_read_latency_ticks": 40672, 00:07:06.238 "write_latency_ticks": 0, 00:07:06.238 "max_write_latency_ticks": 0, 00:07:06.238 "min_write_latency_ticks": 0, 00:07:06.238 "unmap_latency_ticks": 0, 00:07:06.238 "max_unmap_latency_ticks": 0, 00:07:06.238 "min_unmap_latency_ticks": 0, 00:07:06.238 "copy_latency_ticks": 0, 00:07:06.238 "max_copy_latency_ticks": 0, 00:07:06.238 "min_copy_latency_ticks": 0, 00:07:06.238 "io_error": {} 00:07:06.238 } 00:07:06.238 ] 00:07:06.238 }' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3008515 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:07:06.238 18:20:58 
blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:07:06.238 "tick_rate": 2200002400, 00:07:06.238 "ticks": 779058183795, 00:07:06.238 "name": "Malloc_STAT", 00:07:06.238 "channels": [ 00:07:06.238 { 00:07:06.238 "thread_id": 2, 00:07:06.238 "bytes_read": 6208618496, 00:07:06.238 "num_read_ops": 1515776, 00:07:06.238 "bytes_written": 0, 00:07:06.238 "num_write_ops": 0, 00:07:06.238 "bytes_unmapped": 0, 00:07:06.238 "num_unmap_ops": 0, 00:07:06.238 "bytes_copied": 0, 00:07:06.238 "num_copy_ops": 0, 00:07:06.238 "read_latency_ticks": 1145158004042, 00:07:06.238 "max_read_latency_ticks": 1355024, 00:07:06.238 "min_read_latency_ticks": 671719, 00:07:06.238 "write_latency_ticks": 0, 00:07:06.238 "max_write_latency_ticks": 0, 00:07:06.238 "min_write_latency_ticks": 0, 00:07:06.238 "unmap_latency_ticks": 0, 00:07:06.238 "max_unmap_latency_ticks": 0, 00:07:06.238 "min_unmap_latency_ticks": 0, 00:07:06.238 "copy_latency_ticks": 0, 00:07:06.238 "max_copy_latency_ticks": 0, 00:07:06.238 "min_copy_latency_ticks": 0 00:07:06.238 }, 00:07:06.238 { 00:07:06.238 "thread_id": 3, 00:07:06.238 "bytes_read": 6268387328, 00:07:06.238 "num_read_ops": 1530368, 00:07:06.238 "bytes_written": 0, 00:07:06.238 "num_write_ops": 0, 00:07:06.238 "bytes_unmapped": 0, 00:07:06.238 "num_unmap_ops": 0, 00:07:06.238 "bytes_copied": 0, 00:07:06.238 "num_copy_ops": 0, 00:07:06.238 "read_latency_ticks": 1145338695267, 00:07:06.238 "max_read_latency_ticks": 990959, 00:07:06.238 "min_read_latency_ticks": 669089, 00:07:06.238 "write_latency_ticks": 0, 00:07:06.238 "max_write_latency_ticks": 0, 00:07:06.238 "min_write_latency_ticks": 0, 00:07:06.238 "unmap_latency_ticks": 0, 00:07:06.238 "max_unmap_latency_ticks": 0, 00:07:06.238 "min_unmap_latency_ticks": 0, 00:07:06.238 "copy_latency_ticks": 0, 00:07:06.238 "max_copy_latency_ticks": 0, 00:07:06.238 "min_copy_latency_ticks": 0 00:07:06.238 } 00:07:06.238 ] 00:07:06.238 }' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1515776 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1515776 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1530368 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3046144 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:07:06.238 "tick_rate": 2200002400, 00:07:06.238 "ticks": 779141802885, 00:07:06.238 "bdevs": [ 00:07:06.238 { 00:07:06.238 "name": "Malloc_STAT", 
00:07:06.238 "bytes_read": 12701438464, 00:07:06.238 "num_read_ops": 3100931, 00:07:06.238 "bytes_written": 0, 00:07:06.238 "num_write_ops": 0, 00:07:06.238 "bytes_unmapped": 0, 00:07:06.238 "num_unmap_ops": 0, 00:07:06.238 "bytes_copied": 0, 00:07:06.238 "num_copy_ops": 0, 00:07:06.238 "read_latency_ticks": 2333139629653, 00:07:06.238 "max_read_latency_ticks": 1355024, 00:07:06.238 "min_read_latency_ticks": 40672, 00:07:06.238 "write_latency_ticks": 0, 00:07:06.238 "max_write_latency_ticks": 0, 00:07:06.238 "min_write_latency_ticks": 0, 00:07:06.238 "unmap_latency_ticks": 0, 00:07:06.238 "max_unmap_latency_ticks": 0, 00:07:06.238 "min_unmap_latency_ticks": 0, 00:07:06.238 "copy_latency_ticks": 0, 00:07:06.238 "max_copy_latency_ticks": 0, 00:07:06.238 "min_copy_latency_ticks": 0, 00:07:06.238 "io_error": {} 00:07:06.238 } 00:07:06.238 ] 00:07:06.238 }' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3100931 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3046144 -lt 3008515 ']' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3046144 -gt 3100931 ']' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 00:07:06.238 Latency(us) 00:07:06.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.238 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:06.238 Malloc_STAT : 2.10 744128.60 2906.75 0.00 0.00 343.76 57.72 618.12 00:07:06.238 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:06.238 Malloc_STAT : 2.10 751122.31 2934.07 0.00 0.00 340.56 64.70 450.56 00:07:06.238 =================================================================================================================== 00:07:06.238 Total : 1495250.91 5840.82 0.00 0.00 342.15 57.72 618.12 00:07:06.238 0 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48497 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48497 ']' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48497 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48497 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:06.238 killing process with pid 48497 00:07:06.238 Received shutdown signal, test time was about 2.134094 seconds 00:07:06.238 00:07:06.238 Latency(us) 00:07:06.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:06.238 =================================================================================================================== 00:07:06.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48497' 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 48497 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48497 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:07:06.238 00:07:06.238 real 0m3.555s 00:07:06.238 user 0m6.169s 00:07:06.238 sys 0m0.787s 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.238 18:20:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 ************************************ 00:07:06.238 END TEST bdev_stat 00:07:06.238 ************************************ 00:07:06.238 18:20:58 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:07:06.238 18:20:58 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:07:06.238 00:07:06.238 real 1m34.814s 00:07:06.238 user 4m30.945s 00:07:06.238 sys 0m24.819s 00:07:06.238 18:20:58 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.238 18:20:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 ************************************ 00:07:06.238 END TEST blockdev_general 00:07:06.238 ************************************ 00:07:06.238 18:20:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.238 18:20:58 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:06.238 18:20:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.238 18:20:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.238 18:20:58 -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 ************************************ 00:07:06.238 START TEST bdev_raid 00:07:06.238 ************************************ 00:07:06.239 18:20:58 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:06.239 * Looking for test storage... 
00:07:06.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.239 18:20:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:07:06.239 18:20:58 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:07:06.239 18:20:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.239 18:20:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.239 18:20:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.496 ************************************ 00:07:06.496 START TEST raid0_resize_test 00:07:06.497 ************************************ 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48602 00:07:06.497 Process raid pid: 48602 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48602' 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48602 /var/tmp/spdk-raid.sock 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48602 ']' 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.497 18:20:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.497 [2024-07-15 18:20:58.612072] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
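Stepping back to the bdev_stat suite that closed above: its core check is that three iostat snapshots taken while I/O is still running stay consistent. A condensed sketch of what stat_function_test does, again assuming $rpc as shorthand for scripts/rpc.py against a running target:

  io1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  sum=$($rpc bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
  io2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  # read counts only ever grow, so: io1 <= per-channel sum <= io2
  [ "$sum" -ge "$io1" ] && [ "$sum" -le "$io2" ]

In the trace: io1 = 3008515, the per-channel sum = 1515776 + 1530368 = 3046144, and io2 = 3100931, so both comparisons hold, which is exactly what the '[' 3046144 -lt 3008515 ']' and '[' 3046144 -gt 3100931 ']' tests above verify.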
00:07:06.497 [2024-07-15 18:20:58.612340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:07.065 EAL: TSC is not safe to use in SMP mode 00:07:07.065 EAL: TSC is not invariant 00:07:07.065 [2024-07-15 18:20:59.197600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.065 [2024-07-15 18:20:59.305203] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:07.065 [2024-07-15 18:20:59.307303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.065 [2024-07-15 18:20:59.308071] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.065 [2024-07-15 18:20:59.308084] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.632 18:20:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.632 18:20:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:07:07.632 18:20:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:07.632 Base_1 00:07:07.632 18:20:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:07.890 Base_2 00:07:07.890 18:21:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:08.456 [2024-07-15 18:21:00.564360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:08.456 [2024-07-15 18:21:00.564948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:08.456 [2024-07-15 18:21:00.564973] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1294d7c34a00 00:07:08.456 [2024-07-15 18:21:00.564978] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:08.456 [2024-07-15 18:21:00.565014] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1294d7c97e20 00:07:08.456 [2024-07-15 18:21:00.565076] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1294d7c34a00 00:07:08.456 [2024-07-15 18:21:00.565081] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1294d7c34a00 00:07:08.456 [2024-07-15 18:21:00.565116] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.456 18:21:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:08.713 [2024-07-15 18:21:00.876355] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:08.713 [2024-07-15 18:21:00.876386] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:08.713 true 00:07:08.713 18:21:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:08.713 18:21:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:07:08.969 [2024-07-15 18:21:01.120375] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.969 
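Condensed, the resize scenario above drives this RPC sequence: two 32 MiB null bdevs with 512-byte blocks are striped at 64 KiB into a raid0 volume, one member is doubled to 64 MiB, and the volume's block count is read back (all commands as they appear in the trace, using the rpc_py alias defined by the script):

    $rpc_py bdev_null_create Base_1 32 512
    $rpc_py bdev_null_create Base_2 32 512
    $rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    $rpc_py bdev_null_resize Base_1 64
    $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'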
18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:07:08.969 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:07:08.969 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:07:08.969 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:09.227 [2024-07-15 18:21:01.376360] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.227 [2024-07-15 18:21:01.376388] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:09.227 [2024-07-15 18:21:01.376417] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:09.227 true 00:07:09.227 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:07:09.227 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:09.485 [2024-07-15 18:21:01.664386] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.485 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48602 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48602 ']' 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48602 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48602 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:09.486 killing process with pid 48602 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48602' 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48602 00:07:09.486 [2024-07-15 18:21:01.694368] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.486 [2024-07-15 18:21:01.694396] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.486 [2024-07-15 18:21:01.694409] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.486 [2024-07-15 18:21:01.694413] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1294d7c34a00 name Raid, state offline 00:07:09.486 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48602 00:07:09.486 [2024-07-15 18:21:01.694557] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.745 
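The assertions above reduce to raid_size_mb = blkcnt * blksize / 2^20: 131072 * 512 / 1048576 = 64 MiB after the first resize, and 262144 * 512 / 1048576 = 128 MiB after the second. Note that resizing Base_1 alone left the volume at 131072 blocks; the "block count was changed from 131072 to 262144" NOTICE appears only once Base_2 is also resized, since a raid0 stripe can only use the capacity common to all of its members.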
18:21:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:07:09.745 00:07:09.745 real 0m3.317s 00:07:09.745 user 0m4.984s 00:07:09.745 sys 0m0.850s 00:07:09.745 ************************************ 00:07:09.745 END TEST raid0_resize_test 00:07:09.745 ************************************ 00:07:09.745 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.745 18:21:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.745 18:21:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:09.745 18:21:01 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:07:09.745 18:21:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:09.745 18:21:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:09.745 18:21:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:09.745 18:21:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.745 18:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.745 ************************************ 00:07:09.745 START TEST raid_state_function_test 00:07:09.745 ************************************ 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48652 00:07:09.745 Process raid pid: 48652 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48652' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48652 /var/tmp/spdk-raid.sock 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48652 ']' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.745 18:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.745 [2024-07-15 18:21:01.981165] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:09.745 [2024-07-15 18:21:01.981329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:10.311 EAL: TSC is not safe to use in SMP mode 00:07:10.311 EAL: TSC is not invariant 00:07:10.311 [2024-07-15 18:21:02.580562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.569 [2024-07-15 18:21:02.691250] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
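raid_state_function_test walks Existed_Raid through its lifecycle: "configuring" while base bdevs are still missing, "online" once both are claimed, and "offline" after a member is deleted. Each verify_raid_bdev_state step below boils down to pulling the raid's JSON and comparing fields against the expected values, roughly as follows (the jq filter is taken verbatim from the trace; the field comparisons are paraphrased):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # then compare .state, .raid_level, .strip_size_kb,
    # .num_base_bdevs_discovered and .num_base_bdevs_operational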
00:07:10.569 [2024-07-15 18:21:02.693367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.569 [2024-07-15 18:21:02.694132] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.569 [2024-07-15 18:21:02.694148] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.827 18:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.827 18:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:10.827 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:11.085 [2024-07-15 18:21:03.306568] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.085 [2024-07-15 18:21:03.306630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.085 [2024-07-15 18:21:03.306636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.085 [2024-07-15 18:21:03.306645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.085 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.343 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:11.343 "name": "Existed_Raid", 00:07:11.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.344 "strip_size_kb": 64, 00:07:11.344 "state": "configuring", 00:07:11.344 "raid_level": "raid0", 00:07:11.344 "superblock": false, 00:07:11.344 "num_base_bdevs": 2, 00:07:11.344 "num_base_bdevs_discovered": 0, 00:07:11.344 "num_base_bdevs_operational": 2, 00:07:11.344 "base_bdevs_list": [ 00:07:11.344 { 00:07:11.344 "name": "BaseBdev1", 00:07:11.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.344 "is_configured": false, 00:07:11.344 "data_offset": 0, 00:07:11.344 "data_size": 0 00:07:11.344 }, 00:07:11.344 { 00:07:11.344 "name": "BaseBdev2", 
00:07:11.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.344 "is_configured": false, 00:07:11.344 "data_offset": 0, 00:07:11.344 "data_size": 0 00:07:11.344 } 00:07:11.344 ] 00:07:11.344 }' 00:07:11.344 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:11.344 18:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.601 18:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:11.859 [2024-07-15 18:21:04.150602] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.859 [2024-07-15 18:21:04.150661] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ebab434500 name Existed_Raid, state configuring 00:07:11.859 18:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:12.118 [2024-07-15 18:21:04.434670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.118 [2024-07-15 18:21:04.434757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.118 [2024-07-15 18:21:04.434763] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.118 [2024-07-15 18:21:04.434772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.118 18:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.376 [2024-07-15 18:21:04.683739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.376 BaseBdev1 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:12.376 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:12.634 18:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.940 [ 00:07:12.940 { 00:07:12.940 "name": "BaseBdev1", 00:07:12.940 "aliases": [ 00:07:12.940 "ff621458-42d6-11ef-9ade-d5fc5159efa5" 00:07:12.940 ], 00:07:12.940 "product_name": "Malloc disk", 00:07:12.940 "block_size": 512, 00:07:12.940 "num_blocks": 65536, 00:07:12.940 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:12.940 "assigned_rate_limits": { 00:07:12.940 "rw_ios_per_sec": 0, 00:07:12.940 "rw_mbytes_per_sec": 0, 00:07:12.940 "r_mbytes_per_sec": 0, 00:07:12.940 "w_mbytes_per_sec": 0 00:07:12.940 }, 
00:07:12.940 "claimed": true, 00:07:12.940 "claim_type": "exclusive_write", 00:07:12.940 "zoned": false, 00:07:12.940 "supported_io_types": { 00:07:12.940 "read": true, 00:07:12.940 "write": true, 00:07:12.940 "unmap": true, 00:07:12.940 "flush": true, 00:07:12.940 "reset": true, 00:07:12.940 "nvme_admin": false, 00:07:12.940 "nvme_io": false, 00:07:12.940 "nvme_io_md": false, 00:07:12.940 "write_zeroes": true, 00:07:12.940 "zcopy": true, 00:07:12.940 "get_zone_info": false, 00:07:12.940 "zone_management": false, 00:07:12.940 "zone_append": false, 00:07:12.940 "compare": false, 00:07:12.940 "compare_and_write": false, 00:07:12.940 "abort": true, 00:07:12.940 "seek_hole": false, 00:07:12.940 "seek_data": false, 00:07:12.940 "copy": true, 00:07:12.940 "nvme_iov_md": false 00:07:12.940 }, 00:07:12.940 "memory_domains": [ 00:07:12.940 { 00:07:12.940 "dma_device_id": "system", 00:07:12.940 "dma_device_type": 1 00:07:12.940 }, 00:07:12.940 { 00:07:12.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.940 "dma_device_type": 2 00:07:12.941 } 00:07:12.941 ], 00:07:12.941 "driver_specific": {} 00:07:12.941 } 00:07:12.941 ] 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.941 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.198 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:13.198 "name": "Existed_Raid", 00:07:13.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.198 "strip_size_kb": 64, 00:07:13.198 "state": "configuring", 00:07:13.198 "raid_level": "raid0", 00:07:13.198 "superblock": false, 00:07:13.198 "num_base_bdevs": 2, 00:07:13.198 "num_base_bdevs_discovered": 1, 00:07:13.198 "num_base_bdevs_operational": 2, 00:07:13.198 "base_bdevs_list": [ 00:07:13.198 { 00:07:13.198 "name": "BaseBdev1", 00:07:13.198 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:13.198 "is_configured": true, 00:07:13.198 "data_offset": 0, 00:07:13.198 "data_size": 65536 00:07:13.198 }, 00:07:13.198 { 00:07:13.198 "name": "BaseBdev2", 00:07:13.198 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:13.198 "is_configured": false, 00:07:13.198 "data_offset": 0, 00:07:13.198 "data_size": 0 00:07:13.198 } 00:07:13.198 ] 00:07:13.198 }' 00:07:13.198 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:13.198 18:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.456 18:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:13.715 [2024-07-15 18:21:06.026937] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.715 [2024-07-15 18:21:06.026994] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ebab434500 name Existed_Raid, state configuring 00:07:13.715 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:13.973 [2024-07-15 18:21:06.315005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.973 [2024-07-15 18:21:06.315804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.973 [2024-07-15 18:21:06.315842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:14.231 "name": "Existed_Raid", 00:07:14.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.231 "strip_size_kb": 64, 00:07:14.231 "state": "configuring", 00:07:14.231 "raid_level": "raid0", 00:07:14.231 "superblock": false, 00:07:14.231 "num_base_bdevs": 2, 00:07:14.231 "num_base_bdevs_discovered": 1, 00:07:14.231 
"num_base_bdevs_operational": 2, 00:07:14.231 "base_bdevs_list": [ 00:07:14.231 { 00:07:14.231 "name": "BaseBdev1", 00:07:14.231 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:14.231 "is_configured": true, 00:07:14.231 "data_offset": 0, 00:07:14.231 "data_size": 65536 00:07:14.231 }, 00:07:14.231 { 00:07:14.231 "name": "BaseBdev2", 00:07:14.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.231 "is_configured": false, 00:07:14.231 "data_offset": 0, 00:07:14.231 "data_size": 0 00:07:14.231 } 00:07:14.231 ] 00:07:14.231 }' 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:14.231 18:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.797 18:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.057 [2024-07-15 18:21:07.195285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.057 [2024-07-15 18:21:07.195310] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26ebab434a00 00:07:15.057 [2024-07-15 18:21:07.195315] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.057 [2024-07-15 18:21:07.195337] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ebab497e20 00:07:15.057 [2024-07-15 18:21:07.195469] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26ebab434a00 00:07:15.057 [2024-07-15 18:21:07.195473] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x26ebab434a00 00:07:15.057 [2024-07-15 18:21:07.195505] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.057 BaseBdev2 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:15.057 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:15.316 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.573 [ 00:07:15.573 { 00:07:15.573 "name": "BaseBdev2", 00:07:15.573 "aliases": [ 00:07:15.573 "00e1727d-42d7-11ef-9ade-d5fc5159efa5" 00:07:15.573 ], 00:07:15.573 "product_name": "Malloc disk", 00:07:15.573 "block_size": 512, 00:07:15.573 "num_blocks": 65536, 00:07:15.573 "uuid": "00e1727d-42d7-11ef-9ade-d5fc5159efa5", 00:07:15.573 "assigned_rate_limits": { 00:07:15.573 "rw_ios_per_sec": 0, 00:07:15.573 "rw_mbytes_per_sec": 0, 00:07:15.573 "r_mbytes_per_sec": 0, 00:07:15.573 "w_mbytes_per_sec": 0 00:07:15.573 }, 00:07:15.573 "claimed": true, 00:07:15.573 "claim_type": "exclusive_write", 00:07:15.573 "zoned": 
false, 00:07:15.573 "supported_io_types": { 00:07:15.573 "read": true, 00:07:15.573 "write": true, 00:07:15.573 "unmap": true, 00:07:15.573 "flush": true, 00:07:15.573 "reset": true, 00:07:15.573 "nvme_admin": false, 00:07:15.573 "nvme_io": false, 00:07:15.573 "nvme_io_md": false, 00:07:15.573 "write_zeroes": true, 00:07:15.573 "zcopy": true, 00:07:15.573 "get_zone_info": false, 00:07:15.573 "zone_management": false, 00:07:15.573 "zone_append": false, 00:07:15.573 "compare": false, 00:07:15.573 "compare_and_write": false, 00:07:15.573 "abort": true, 00:07:15.573 "seek_hole": false, 00:07:15.573 "seek_data": false, 00:07:15.573 "copy": true, 00:07:15.573 "nvme_iov_md": false 00:07:15.573 }, 00:07:15.573 "memory_domains": [ 00:07:15.573 { 00:07:15.573 "dma_device_id": "system", 00:07:15.573 "dma_device_type": 1 00:07:15.573 }, 00:07:15.573 { 00:07:15.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.573 "dma_device_type": 2 00:07:15.573 } 00:07:15.573 ], 00:07:15.573 "driver_specific": {} 00:07:15.573 } 00:07:15.573 ] 00:07:15.573 18:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.574 18:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.832 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:15.832 "name": "Existed_Raid", 00:07:15.832 "uuid": "00e1793a-42d7-11ef-9ade-d5fc5159efa5", 00:07:15.832 "strip_size_kb": 64, 00:07:15.832 "state": "online", 00:07:15.832 "raid_level": "raid0", 00:07:15.832 "superblock": false, 00:07:15.832 "num_base_bdevs": 2, 00:07:15.832 "num_base_bdevs_discovered": 2, 00:07:15.832 "num_base_bdevs_operational": 2, 00:07:15.832 "base_bdevs_list": [ 00:07:15.832 { 00:07:15.832 "name": "BaseBdev1", 00:07:15.832 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:15.832 "is_configured": true, 00:07:15.832 "data_offset": 0, 00:07:15.832 "data_size": 65536 00:07:15.832 }, 00:07:15.832 { 
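At this point both 32 MiB malloc bdevs are claimed (65536 blocks of 512 bytes each, per the dumps above), so the raid0 volume transitions to "online" with blockcnt 131072 = 2 * 65536: raid0 capacity is simply the sum of its members. The check that follows is the same bdev_raid_get_bdevs / jq select pattern, now expecting "state": "online" and both num_base_bdevs_discovered and num_base_bdevs_operational equal to 2.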
00:07:15.832 "name": "BaseBdev2", 00:07:15.832 "uuid": "00e1727d-42d7-11ef-9ade-d5fc5159efa5", 00:07:15.832 "is_configured": true, 00:07:15.832 "data_offset": 0, 00:07:15.832 "data_size": 65536 00:07:15.832 } 00:07:15.832 ] 00:07:15.832 }' 00:07:15.832 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:15.832 18:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:16.089 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:16.347 [2024-07-15 18:21:08.591429] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:16.347 "name": "Existed_Raid", 00:07:16.347 "aliases": [ 00:07:16.347 "00e1793a-42d7-11ef-9ade-d5fc5159efa5" 00:07:16.347 ], 00:07:16.347 "product_name": "Raid Volume", 00:07:16.347 "block_size": 512, 00:07:16.347 "num_blocks": 131072, 00:07:16.347 "uuid": "00e1793a-42d7-11ef-9ade-d5fc5159efa5", 00:07:16.347 "assigned_rate_limits": { 00:07:16.347 "rw_ios_per_sec": 0, 00:07:16.347 "rw_mbytes_per_sec": 0, 00:07:16.347 "r_mbytes_per_sec": 0, 00:07:16.347 "w_mbytes_per_sec": 0 00:07:16.347 }, 00:07:16.347 "claimed": false, 00:07:16.347 "zoned": false, 00:07:16.347 "supported_io_types": { 00:07:16.347 "read": true, 00:07:16.347 "write": true, 00:07:16.347 "unmap": true, 00:07:16.347 "flush": true, 00:07:16.347 "reset": true, 00:07:16.347 "nvme_admin": false, 00:07:16.347 "nvme_io": false, 00:07:16.347 "nvme_io_md": false, 00:07:16.347 "write_zeroes": true, 00:07:16.347 "zcopy": false, 00:07:16.347 "get_zone_info": false, 00:07:16.347 "zone_management": false, 00:07:16.347 "zone_append": false, 00:07:16.347 "compare": false, 00:07:16.347 "compare_and_write": false, 00:07:16.347 "abort": false, 00:07:16.347 "seek_hole": false, 00:07:16.347 "seek_data": false, 00:07:16.347 "copy": false, 00:07:16.347 "nvme_iov_md": false 00:07:16.347 }, 00:07:16.347 "memory_domains": [ 00:07:16.347 { 00:07:16.347 "dma_device_id": "system", 00:07:16.347 "dma_device_type": 1 00:07:16.347 }, 00:07:16.347 { 00:07:16.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.347 "dma_device_type": 2 00:07:16.347 }, 00:07:16.347 { 00:07:16.347 "dma_device_id": "system", 00:07:16.347 "dma_device_type": 1 00:07:16.347 }, 00:07:16.347 { 00:07:16.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.347 "dma_device_type": 2 00:07:16.347 } 00:07:16.347 ], 00:07:16.347 "driver_specific": { 00:07:16.347 "raid": { 00:07:16.347 "uuid": "00e1793a-42d7-11ef-9ade-d5fc5159efa5", 00:07:16.347 "strip_size_kb": 64, 00:07:16.347 "state": 
"online", 00:07:16.347 "raid_level": "raid0", 00:07:16.347 "superblock": false, 00:07:16.347 "num_base_bdevs": 2, 00:07:16.347 "num_base_bdevs_discovered": 2, 00:07:16.347 "num_base_bdevs_operational": 2, 00:07:16.347 "base_bdevs_list": [ 00:07:16.347 { 00:07:16.347 "name": "BaseBdev1", 00:07:16.347 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:16.347 "is_configured": true, 00:07:16.347 "data_offset": 0, 00:07:16.347 "data_size": 65536 00:07:16.347 }, 00:07:16.347 { 00:07:16.347 "name": "BaseBdev2", 00:07:16.347 "uuid": "00e1727d-42d7-11ef-9ade-d5fc5159efa5", 00:07:16.347 "is_configured": true, 00:07:16.347 "data_offset": 0, 00:07:16.347 "data_size": 65536 00:07:16.347 } 00:07:16.347 ] 00:07:16.347 } 00:07:16.347 } 00:07:16.347 }' 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:16.347 BaseBdev2' 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:16.347 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:16.605 "name": "BaseBdev1", 00:07:16.605 "aliases": [ 00:07:16.605 "ff621458-42d6-11ef-9ade-d5fc5159efa5" 00:07:16.605 ], 00:07:16.605 "product_name": "Malloc disk", 00:07:16.605 "block_size": 512, 00:07:16.605 "num_blocks": 65536, 00:07:16.605 "uuid": "ff621458-42d6-11ef-9ade-d5fc5159efa5", 00:07:16.605 "assigned_rate_limits": { 00:07:16.605 "rw_ios_per_sec": 0, 00:07:16.605 "rw_mbytes_per_sec": 0, 00:07:16.605 "r_mbytes_per_sec": 0, 00:07:16.605 "w_mbytes_per_sec": 0 00:07:16.605 }, 00:07:16.605 "claimed": true, 00:07:16.605 "claim_type": "exclusive_write", 00:07:16.605 "zoned": false, 00:07:16.605 "supported_io_types": { 00:07:16.605 "read": true, 00:07:16.605 "write": true, 00:07:16.605 "unmap": true, 00:07:16.605 "flush": true, 00:07:16.605 "reset": true, 00:07:16.605 "nvme_admin": false, 00:07:16.605 "nvme_io": false, 00:07:16.605 "nvme_io_md": false, 00:07:16.605 "write_zeroes": true, 00:07:16.605 "zcopy": true, 00:07:16.605 "get_zone_info": false, 00:07:16.605 "zone_management": false, 00:07:16.605 "zone_append": false, 00:07:16.605 "compare": false, 00:07:16.605 "compare_and_write": false, 00:07:16.605 "abort": true, 00:07:16.605 "seek_hole": false, 00:07:16.605 "seek_data": false, 00:07:16.605 "copy": true, 00:07:16.605 "nvme_iov_md": false 00:07:16.605 }, 00:07:16.605 "memory_domains": [ 00:07:16.605 { 00:07:16.605 "dma_device_id": "system", 00:07:16.605 "dma_device_type": 1 00:07:16.605 }, 00:07:16.605 { 00:07:16.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.605 "dma_device_type": 2 00:07:16.605 } 00:07:16.605 ], 00:07:16.605 "driver_specific": {} 00:07:16.605 }' 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:16.605 18:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:16.896 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:16.896 "name": "BaseBdev2", 00:07:16.896 "aliases": [ 00:07:16.896 "00e1727d-42d7-11ef-9ade-d5fc5159efa5" 00:07:16.896 ], 00:07:16.896 "product_name": "Malloc disk", 00:07:16.896 "block_size": 512, 00:07:16.896 "num_blocks": 65536, 00:07:16.896 "uuid": "00e1727d-42d7-11ef-9ade-d5fc5159efa5", 00:07:16.896 "assigned_rate_limits": { 00:07:16.896 "rw_ios_per_sec": 0, 00:07:16.896 "rw_mbytes_per_sec": 0, 00:07:16.896 "r_mbytes_per_sec": 0, 00:07:16.896 "w_mbytes_per_sec": 0 00:07:16.896 }, 00:07:16.896 "claimed": true, 00:07:16.896 "claim_type": "exclusive_write", 00:07:16.896 "zoned": false, 00:07:16.896 "supported_io_types": { 00:07:16.896 "read": true, 00:07:16.896 "write": true, 00:07:16.896 "unmap": true, 00:07:16.896 "flush": true, 00:07:16.896 "reset": true, 00:07:16.896 "nvme_admin": false, 00:07:16.896 "nvme_io": false, 00:07:16.896 "nvme_io_md": false, 00:07:16.896 "write_zeroes": true, 00:07:16.896 "zcopy": true, 00:07:16.896 "get_zone_info": false, 00:07:16.896 "zone_management": false, 00:07:16.896 "zone_append": false, 00:07:16.896 "compare": false, 00:07:16.896 "compare_and_write": false, 00:07:16.896 "abort": true, 00:07:16.896 "seek_hole": false, 00:07:16.896 "seek_data": false, 00:07:16.896 "copy": true, 00:07:16.896 "nvme_iov_md": false 00:07:16.896 }, 00:07:16.896 "memory_domains": [ 00:07:16.896 { 00:07:16.896 "dma_device_id": "system", 00:07:16.896 "dma_device_type": 1 00:07:16.896 }, 00:07:16.896 { 00:07:16.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.896 "dma_device_type": 2 00:07:16.896 } 00:07:16.896 ], 00:07:16.896 "driver_specific": {} 00:07:16.896 }' 00:07:16.896 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:17.154 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:17.412 [2024-07-15 18:21:09.575435] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.412 [2024-07-15 18:21:09.575460] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.412 [2024-07-15 18:21:09.575475] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.412 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.668 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:17.668 "name": "Existed_Raid", 00:07:17.668 "uuid": "00e1793a-42d7-11ef-9ade-d5fc5159efa5", 00:07:17.668 "strip_size_kb": 64, 00:07:17.668 "state": "offline", 00:07:17.668 "raid_level": "raid0", 00:07:17.668 "superblock": false, 00:07:17.668 
"num_base_bdevs": 2, 00:07:17.668 "num_base_bdevs_discovered": 1, 00:07:17.668 "num_base_bdevs_operational": 1, 00:07:17.668 "base_bdevs_list": [ 00:07:17.668 { 00:07:17.668 "name": null, 00:07:17.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.668 "is_configured": false, 00:07:17.668 "data_offset": 0, 00:07:17.668 "data_size": 65536 00:07:17.668 }, 00:07:17.668 { 00:07:17.668 "name": "BaseBdev2", 00:07:17.668 "uuid": "00e1727d-42d7-11ef-9ade-d5fc5159efa5", 00:07:17.668 "is_configured": true, 00:07:17.668 "data_offset": 0, 00:07:17.668 "data_size": 65536 00:07:17.668 } 00:07:17.668 ] 00:07:17.668 }' 00:07:17.668 18:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:17.668 18:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.926 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:17.926 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:17.926 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.926 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:18.184 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:18.184 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.185 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:18.443 [2024-07-15 18:21:10.749199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.443 [2024-07-15 18:21:10.749249] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ebab434a00 name Existed_Raid, state offline 00:07:18.443 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:18.443 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:18.443 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.443 18:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48652 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48652 ']' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48652 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48652 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48652' 00:07:18.701 killing process with pid 48652 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48652 00:07:18.701 [2024-07-15 18:21:11.057894] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.701 [2024-07-15 18:21:11.057929] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.701 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48652 00:07:18.959 18:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:18.959 00:07:18.959 real 0m9.307s 00:07:18.959 user 0m15.980s 00:07:18.959 sys 0m1.840s 00:07:18.959 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.959 ************************************ 00:07:18.959 END TEST raid_state_function_test 00:07:18.959 ************************************ 00:07:18.959 18:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 18:21:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:18.959 18:21:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:18.959 18:21:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:18.959 18:21:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.959 18:21:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.219 ************************************ 00:07:19.219 START TEST raid_state_function_test_sb 00:07:19.219 ************************************ 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48927 00:07:19.219 Process raid pid: 48927 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48927' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48927 /var/tmp/spdk-raid.sock 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48927 ']' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.219 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.219 [2024-07-15 18:21:11.335522] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:19.219 [2024-07-15 18:21:11.335765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:19.820 EAL: TSC is not safe to use in SMP mode 00:07:19.820 EAL: TSC is not invariant 00:07:19.820 [2024-07-15 18:21:11.938230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.820 [2024-07-15 18:21:12.049662] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
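Two details worth noting at this seam. First, killprocess took the FreeBSD branch of its safety check above: since uname is not Linux, it resolved the process name with 'ps -c -o command <pid> | tail -1' (yielding bdev_svc) and would refuse to kill anything resolving to sudo before issuing kill and wait. Second, the _sb variant now starting repeats the same state-machine walk with superblock=true, which turns the create call that follows into (as traced below):

    $rpc_py bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

where -s requests raid superblock metadata on the base bdevs, and the subsequent dumps accordingly report "superblock": true.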
00:07:19.820 [2024-07-15 18:21:12.051852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.820 [2024-07-15 18:21:12.052742] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.820 [2024-07-15 18:21:12.052758] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.080 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.080 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:20.080 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:20.648 [2024-07-15 18:21:12.709398] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.648 [2024-07-15 18:21:12.709465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.648 [2024-07-15 18:21:12.709471] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.648 [2024-07-15 18:21:12.709481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.648 "name": "Existed_Raid", 00:07:20.648 "uuid": "042ada8a-42d7-11ef-9ade-d5fc5159efa5", 00:07:20.648 "strip_size_kb": 64, 00:07:20.648 "state": "configuring", 00:07:20.648 "raid_level": "raid0", 00:07:20.648 "superblock": true, 00:07:20.648 "num_base_bdevs": 2, 00:07:20.648 "num_base_bdevs_discovered": 0, 00:07:20.648 "num_base_bdevs_operational": 2, 00:07:20.648 "base_bdevs_list": [ 00:07:20.648 { 00:07:20.648 "name": "BaseBdev1", 00:07:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.648 "is_configured": false, 00:07:20.648 "data_offset": 0, 00:07:20.648 "data_size": 0 00:07:20.648 }, 
00:07:20.648 { 00:07:20.648 "name": "BaseBdev2", 00:07:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.648 "is_configured": false, 00:07:20.648 "data_offset": 0, 00:07:20.648 "data_size": 0 00:07:20.648 } 00:07:20.648 ] 00:07:20.648 }' 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.648 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:21.215 [2024-07-15 18:21:13.505393] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.215 [2024-07-15 18:21:13.505450] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28f106434500 name Existed_Raid, state configuring 00:07:21.215 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:21.473 [2024-07-15 18:21:13.781448] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.473 [2024-07-15 18:21:13.781520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.473 [2024-07-15 18:21:13.781526] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.473 [2024-07-15 18:21:13.781535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.473 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.732 [2024-07-15 18:21:14.070507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.732 BaseBdev1 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:21.732 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.300 [ 00:07:22.300 { 00:07:22.300 "name": "BaseBdev1", 00:07:22.300 "aliases": [ 00:07:22.300 "04fa629c-42d7-11ef-9ade-d5fc5159efa5" 00:07:22.300 ], 00:07:22.300 "product_name": "Malloc disk", 00:07:22.300 "block_size": 512, 00:07:22.300 "num_blocks": 65536, 00:07:22.300 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:22.300 "assigned_rate_limits": { 00:07:22.300 "rw_ios_per_sec": 0, 00:07:22.300 "rw_mbytes_per_sec": 
0, 00:07:22.300 "r_mbytes_per_sec": 0, 00:07:22.300 "w_mbytes_per_sec": 0 00:07:22.300 }, 00:07:22.300 "claimed": true, 00:07:22.300 "claim_type": "exclusive_write", 00:07:22.300 "zoned": false, 00:07:22.300 "supported_io_types": { 00:07:22.300 "read": true, 00:07:22.300 "write": true, 00:07:22.300 "unmap": true, 00:07:22.300 "flush": true, 00:07:22.300 "reset": true, 00:07:22.300 "nvme_admin": false, 00:07:22.300 "nvme_io": false, 00:07:22.300 "nvme_io_md": false, 00:07:22.300 "write_zeroes": true, 00:07:22.300 "zcopy": true, 00:07:22.300 "get_zone_info": false, 00:07:22.300 "zone_management": false, 00:07:22.300 "zone_append": false, 00:07:22.300 "compare": false, 00:07:22.300 "compare_and_write": false, 00:07:22.300 "abort": true, 00:07:22.300 "seek_hole": false, 00:07:22.300 "seek_data": false, 00:07:22.300 "copy": true, 00:07:22.300 "nvme_iov_md": false 00:07:22.300 }, 00:07:22.300 "memory_domains": [ 00:07:22.300 { 00:07:22.300 "dma_device_id": "system", 00:07:22.300 "dma_device_type": 1 00:07:22.300 }, 00:07:22.300 { 00:07:22.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.300 "dma_device_type": 2 00:07:22.300 } 00:07:22.300 ], 00:07:22.300 "driver_specific": {} 00:07:22.300 } 00:07:22.300 ] 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:22.300 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:22.301 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.301 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.559 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:22.559 "name": "Existed_Raid", 00:07:22.559 "uuid": "04ce6f95-42d7-11ef-9ade-d5fc5159efa5", 00:07:22.559 "strip_size_kb": 64, 00:07:22.559 "state": "configuring", 00:07:22.559 "raid_level": "raid0", 00:07:22.559 "superblock": true, 00:07:22.559 "num_base_bdevs": 2, 00:07:22.559 "num_base_bdevs_discovered": 1, 00:07:22.559 "num_base_bdevs_operational": 2, 00:07:22.559 "base_bdevs_list": [ 00:07:22.559 { 00:07:22.559 "name": "BaseBdev1", 00:07:22.559 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:22.559 "is_configured": true, 00:07:22.559 "data_offset": 2048, 00:07:22.559 "data_size": 
63488 00:07:22.559 }, 00:07:22.559 { 00:07:22.559 "name": "BaseBdev2", 00:07:22.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.559 "is_configured": false, 00:07:22.559 "data_offset": 0, 00:07:22.559 "data_size": 0 00:07:22.559 } 00:07:22.559 ] 00:07:22.559 }' 00:07:22.559 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:22.559 18:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.126 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:23.126 [2024-07-15 18:21:15.469511] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.126 [2024-07-15 18:21:15.469547] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28f106434500 name Existed_Raid, state configuring 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:23.386 [2024-07-15 18:21:15.705504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.386 [2024-07-15 18:21:15.706337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.386 [2024-07-15 18:21:15.706376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.386 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.644 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:23.644 "name": "Existed_Raid", 00:07:23.644 "uuid": "05f405de-42d7-11ef-9ade-d5fc5159efa5", 00:07:23.644 "strip_size_kb": 64, 00:07:23.644 
"state": "configuring", 00:07:23.644 "raid_level": "raid0", 00:07:23.644 "superblock": true, 00:07:23.644 "num_base_bdevs": 2, 00:07:23.644 "num_base_bdevs_discovered": 1, 00:07:23.644 "num_base_bdevs_operational": 2, 00:07:23.644 "base_bdevs_list": [ 00:07:23.644 { 00:07:23.644 "name": "BaseBdev1", 00:07:23.644 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:23.644 "is_configured": true, 00:07:23.644 "data_offset": 2048, 00:07:23.644 "data_size": 63488 00:07:23.644 }, 00:07:23.644 { 00:07:23.644 "name": "BaseBdev2", 00:07:23.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.644 "is_configured": false, 00:07:23.644 "data_offset": 0, 00:07:23.644 "data_size": 0 00:07:23.644 } 00:07:23.644 ] 00:07:23.644 }' 00:07:23.644 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:23.644 18:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.212 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.213 [2024-07-15 18:21:16.529663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.213 [2024-07-15 18:21:16.529759] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28f106434a00 00:07:24.213 [2024-07-15 18:21:16.529766] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.213 [2024-07-15 18:21:16.529788] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28f106497e20 00:07:24.213 [2024-07-15 18:21:16.529835] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28f106434a00 00:07:24.213 [2024-07-15 18:21:16.529839] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x28f106434a00 00:07:24.213 [2024-07-15 18:21:16.529860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.213 BaseBdev2 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:24.213 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:24.471 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.730 [ 00:07:24.730 { 00:07:24.730 "name": "BaseBdev2", 00:07:24.730 "aliases": [ 00:07:24.730 "0671c27f-42d7-11ef-9ade-d5fc5159efa5" 00:07:24.730 ], 00:07:24.730 "product_name": "Malloc disk", 00:07:24.730 "block_size": 512, 00:07:24.730 "num_blocks": 65536, 00:07:24.730 "uuid": "0671c27f-42d7-11ef-9ade-d5fc5159efa5", 00:07:24.730 "assigned_rate_limits": { 00:07:24.730 "rw_ios_per_sec": 0, 
00:07:24.730 "rw_mbytes_per_sec": 0, 00:07:24.730 "r_mbytes_per_sec": 0, 00:07:24.730 "w_mbytes_per_sec": 0 00:07:24.730 }, 00:07:24.730 "claimed": true, 00:07:24.730 "claim_type": "exclusive_write", 00:07:24.730 "zoned": false, 00:07:24.730 "supported_io_types": { 00:07:24.730 "read": true, 00:07:24.730 "write": true, 00:07:24.730 "unmap": true, 00:07:24.730 "flush": true, 00:07:24.730 "reset": true, 00:07:24.730 "nvme_admin": false, 00:07:24.730 "nvme_io": false, 00:07:24.730 "nvme_io_md": false, 00:07:24.730 "write_zeroes": true, 00:07:24.730 "zcopy": true, 00:07:24.730 "get_zone_info": false, 00:07:24.730 "zone_management": false, 00:07:24.730 "zone_append": false, 00:07:24.730 "compare": false, 00:07:24.730 "compare_and_write": false, 00:07:24.730 "abort": true, 00:07:24.730 "seek_hole": false, 00:07:24.730 "seek_data": false, 00:07:24.730 "copy": true, 00:07:24.730 "nvme_iov_md": false 00:07:24.730 }, 00:07:24.730 "memory_domains": [ 00:07:24.730 { 00:07:24.730 "dma_device_id": "system", 00:07:24.730 "dma_device_type": 1 00:07:24.730 }, 00:07:24.730 { 00:07:24.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.730 "dma_device_type": 2 00:07:24.730 } 00:07:24.730 ], 00:07:24.730 "driver_specific": {} 00:07:24.730 } 00:07:24.730 ] 00:07:24.730 18:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:24.730 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:24.730 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:24.730 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.731 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.989 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:24.989 "name": "Existed_Raid", 00:07:24.989 "uuid": "05f405de-42d7-11ef-9ade-d5fc5159efa5", 00:07:24.989 "strip_size_kb": 64, 00:07:24.989 "state": "online", 00:07:24.989 "raid_level": "raid0", 00:07:24.989 "superblock": true, 00:07:24.989 "num_base_bdevs": 2, 00:07:24.989 "num_base_bdevs_discovered": 2, 00:07:24.989 "num_base_bdevs_operational": 2, 
00:07:24.989 "base_bdevs_list": [ 00:07:24.989 { 00:07:24.989 "name": "BaseBdev1", 00:07:24.989 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:24.989 "is_configured": true, 00:07:24.989 "data_offset": 2048, 00:07:24.989 "data_size": 63488 00:07:24.989 }, 00:07:24.989 { 00:07:24.989 "name": "BaseBdev2", 00:07:24.989 "uuid": "0671c27f-42d7-11ef-9ade-d5fc5159efa5", 00:07:24.989 "is_configured": true, 00:07:24.989 "data_offset": 2048, 00:07:24.989 "data_size": 63488 00:07:24.989 } 00:07:24.989 ] 00:07:24.989 }' 00:07:24.989 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:24.989 18:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:25.557 [2024-07-15 18:21:17.853593] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:25.557 "name": "Existed_Raid", 00:07:25.557 "aliases": [ 00:07:25.557 "05f405de-42d7-11ef-9ade-d5fc5159efa5" 00:07:25.557 ], 00:07:25.557 "product_name": "Raid Volume", 00:07:25.557 "block_size": 512, 00:07:25.557 "num_blocks": 126976, 00:07:25.557 "uuid": "05f405de-42d7-11ef-9ade-d5fc5159efa5", 00:07:25.557 "assigned_rate_limits": { 00:07:25.557 "rw_ios_per_sec": 0, 00:07:25.557 "rw_mbytes_per_sec": 0, 00:07:25.557 "r_mbytes_per_sec": 0, 00:07:25.557 "w_mbytes_per_sec": 0 00:07:25.557 }, 00:07:25.557 "claimed": false, 00:07:25.557 "zoned": false, 00:07:25.557 "supported_io_types": { 00:07:25.557 "read": true, 00:07:25.557 "write": true, 00:07:25.557 "unmap": true, 00:07:25.557 "flush": true, 00:07:25.557 "reset": true, 00:07:25.557 "nvme_admin": false, 00:07:25.557 "nvme_io": false, 00:07:25.557 "nvme_io_md": false, 00:07:25.557 "write_zeroes": true, 00:07:25.557 "zcopy": false, 00:07:25.557 "get_zone_info": false, 00:07:25.557 "zone_management": false, 00:07:25.557 "zone_append": false, 00:07:25.557 "compare": false, 00:07:25.557 "compare_and_write": false, 00:07:25.557 "abort": false, 00:07:25.557 "seek_hole": false, 00:07:25.557 "seek_data": false, 00:07:25.557 "copy": false, 00:07:25.557 "nvme_iov_md": false 00:07:25.557 }, 00:07:25.557 "memory_domains": [ 00:07:25.557 { 00:07:25.557 "dma_device_id": "system", 00:07:25.557 "dma_device_type": 1 00:07:25.557 }, 00:07:25.557 { 00:07:25.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.557 "dma_device_type": 2 00:07:25.557 }, 00:07:25.557 { 00:07:25.557 "dma_device_id": "system", 00:07:25.557 "dma_device_type": 1 00:07:25.557 
}, 00:07:25.557 { 00:07:25.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.557 "dma_device_type": 2 00:07:25.557 } 00:07:25.557 ], 00:07:25.557 "driver_specific": { 00:07:25.557 "raid": { 00:07:25.557 "uuid": "05f405de-42d7-11ef-9ade-d5fc5159efa5", 00:07:25.557 "strip_size_kb": 64, 00:07:25.557 "state": "online", 00:07:25.557 "raid_level": "raid0", 00:07:25.557 "superblock": true, 00:07:25.557 "num_base_bdevs": 2, 00:07:25.557 "num_base_bdevs_discovered": 2, 00:07:25.557 "num_base_bdevs_operational": 2, 00:07:25.557 "base_bdevs_list": [ 00:07:25.557 { 00:07:25.557 "name": "BaseBdev1", 00:07:25.557 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:25.557 "is_configured": true, 00:07:25.557 "data_offset": 2048, 00:07:25.557 "data_size": 63488 00:07:25.557 }, 00:07:25.557 { 00:07:25.557 "name": "BaseBdev2", 00:07:25.557 "uuid": "0671c27f-42d7-11ef-9ade-d5fc5159efa5", 00:07:25.557 "is_configured": true, 00:07:25.557 "data_offset": 2048, 00:07:25.557 "data_size": 63488 00:07:25.557 } 00:07:25.557 ] 00:07:25.557 } 00:07:25.557 } 00:07:25.557 }' 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.557 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:25.557 BaseBdev2' 00:07:25.558 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:25.558 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:25.558 18:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:25.817 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:25.817 "name": "BaseBdev1", 00:07:25.817 "aliases": [ 00:07:25.817 "04fa629c-42d7-11ef-9ade-d5fc5159efa5" 00:07:25.817 ], 00:07:25.817 "product_name": "Malloc disk", 00:07:25.817 "block_size": 512, 00:07:25.817 "num_blocks": 65536, 00:07:25.817 "uuid": "04fa629c-42d7-11ef-9ade-d5fc5159efa5", 00:07:25.817 "assigned_rate_limits": { 00:07:25.817 "rw_ios_per_sec": 0, 00:07:25.817 "rw_mbytes_per_sec": 0, 00:07:25.817 "r_mbytes_per_sec": 0, 00:07:25.817 "w_mbytes_per_sec": 0 00:07:25.817 }, 00:07:25.817 "claimed": true, 00:07:25.817 "claim_type": "exclusive_write", 00:07:25.817 "zoned": false, 00:07:25.817 "supported_io_types": { 00:07:25.817 "read": true, 00:07:25.817 "write": true, 00:07:25.817 "unmap": true, 00:07:25.817 "flush": true, 00:07:25.817 "reset": true, 00:07:25.817 "nvme_admin": false, 00:07:25.817 "nvme_io": false, 00:07:25.817 "nvme_io_md": false, 00:07:25.817 "write_zeroes": true, 00:07:25.817 "zcopy": true, 00:07:25.817 "get_zone_info": false, 00:07:25.817 "zone_management": false, 00:07:25.817 "zone_append": false, 00:07:25.817 "compare": false, 00:07:25.817 "compare_and_write": false, 00:07:25.817 "abort": true, 00:07:25.817 "seek_hole": false, 00:07:25.817 "seek_data": false, 00:07:25.817 "copy": true, 00:07:25.818 "nvme_iov_md": false 00:07:25.818 }, 00:07:25.818 "memory_domains": [ 00:07:25.818 { 00:07:25.818 "dma_device_id": "system", 00:07:25.818 "dma_device_type": 1 00:07:25.818 }, 00:07:25.818 { 00:07:25.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.818 "dma_device_type": 2 00:07:25.818 } 00:07:25.818 ], 00:07:25.818 "driver_specific": {} 00:07:25.818 }' 00:07:25.818 18:21:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:25.818 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.076 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.076 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:26.077 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:26.077 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:26.077 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:26.335 "name": "BaseBdev2", 00:07:26.335 "aliases": [ 00:07:26.335 "0671c27f-42d7-11ef-9ade-d5fc5159efa5" 00:07:26.335 ], 00:07:26.335 "product_name": "Malloc disk", 00:07:26.335 "block_size": 512, 00:07:26.335 "num_blocks": 65536, 00:07:26.335 "uuid": "0671c27f-42d7-11ef-9ade-d5fc5159efa5", 00:07:26.335 "assigned_rate_limits": { 00:07:26.335 "rw_ios_per_sec": 0, 00:07:26.335 "rw_mbytes_per_sec": 0, 00:07:26.335 "r_mbytes_per_sec": 0, 00:07:26.335 "w_mbytes_per_sec": 0 00:07:26.335 }, 00:07:26.335 "claimed": true, 00:07:26.335 "claim_type": "exclusive_write", 00:07:26.335 "zoned": false, 00:07:26.335 "supported_io_types": { 00:07:26.335 "read": true, 00:07:26.335 "write": true, 00:07:26.335 "unmap": true, 00:07:26.335 "flush": true, 00:07:26.335 "reset": true, 00:07:26.335 "nvme_admin": false, 00:07:26.335 "nvme_io": false, 00:07:26.335 "nvme_io_md": false, 00:07:26.335 "write_zeroes": true, 00:07:26.335 "zcopy": true, 00:07:26.335 "get_zone_info": false, 00:07:26.335 "zone_management": false, 00:07:26.335 "zone_append": false, 00:07:26.335 "compare": false, 00:07:26.335 "compare_and_write": false, 00:07:26.335 "abort": true, 00:07:26.335 "seek_hole": false, 00:07:26.335 "seek_data": false, 00:07:26.335 "copy": true, 00:07:26.335 "nvme_iov_md": false 00:07:26.335 }, 00:07:26.335 "memory_domains": [ 00:07:26.335 { 00:07:26.335 "dma_device_id": "system", 00:07:26.335 "dma_device_type": 1 00:07:26.335 }, 00:07:26.335 { 00:07:26.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.335 "dma_device_type": 2 00:07:26.335 } 00:07:26.335 ], 00:07:26.335 "driver_specific": {} 00:07:26.335 }' 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:26.335 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:26.594 [2024-07-15 18:21:18.865597] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.594 [2024-07-15 18:21:18.865626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.594 [2024-07-15 18:21:18.865641] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
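(Editorial gloss: the verification in progress here checks the offline transition. RAID0 carries no redundancy, so has_redundancy returns 1 and deleting a single base bdev is expected to take the whole array offline. A condensed sketch of the equivalent RPC calls, same socket path as above:)

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: offline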
00:07:26.595 18:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.853 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:26.853 "name": "Existed_Raid", 00:07:26.853 "uuid": "05f405de-42d7-11ef-9ade-d5fc5159efa5", 00:07:26.853 "strip_size_kb": 64, 00:07:26.853 "state": "offline", 00:07:26.853 "raid_level": "raid0", 00:07:26.853 "superblock": true, 00:07:26.853 "num_base_bdevs": 2, 00:07:26.853 "num_base_bdevs_discovered": 1, 00:07:26.853 "num_base_bdevs_operational": 1, 00:07:26.853 "base_bdevs_list": [ 00:07:26.853 { 00:07:26.853 "name": null, 00:07:26.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.853 "is_configured": false, 00:07:26.853 "data_offset": 2048, 00:07:26.853 "data_size": 63488 00:07:26.853 }, 00:07:26.853 { 00:07:26.853 "name": "BaseBdev2", 00:07:26.853 "uuid": "0671c27f-42d7-11ef-9ade-d5fc5159efa5", 00:07:26.853 "is_configured": true, 00:07:26.853 "data_offset": 2048, 00:07:26.853 "data_size": 63488 00:07:26.853 } 00:07:26.853 ] 00:07:26.853 }' 00:07:26.853 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:26.853 18:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.420 18:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:27.986 [2024-07-15 18:21:20.043849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.986 [2024-07-15 18:21:20.043884] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28f106434a00 name Existed_Raid, state offline 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48927 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 48927 ']' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48927 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48927 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48927' 00:07:27.986 killing process with pid 48927 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48927 00:07:27.986 [2024-07-15 18:21:20.317160] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.986 [2024-07-15 18:21:20.317194] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.986 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48927 00:07:28.244 18:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:28.244 00:07:28.244 real 0m9.215s 00:07:28.244 user 0m15.923s 00:07:28.244 sys 0m1.699s 00:07:28.244 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.244 18:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.244 ************************************ 00:07:28.244 END TEST raid_state_function_test_sb 00:07:28.244 ************************************ 00:07:28.244 18:21:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:28.244 18:21:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:28.244 18:21:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.244 18:21:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.244 18:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.244 ************************************ 00:07:28.244 START TEST raid_superblock_test 00:07:28.244 ************************************ 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49197 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49197 /var/tmp/spdk-raid.sock 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49197 ']' 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.244 18:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.244 [2024-07-15 18:21:20.593037] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:28.244 [2024-07-15 18:21:20.593207] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:29.178 EAL: TSC is not safe to use in SMP mode 00:07:29.178 EAL: TSC is not invariant 00:07:29.178 [2024-07-15 18:21:21.211549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.178 [2024-07-15 18:21:21.322860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
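(Editorial gloss: the superblock test starting here builds its array on passthru bdevs layered over malloc disks rather than on raw mallocs. The following is a condensed sketch of the RPC calls the subsequent xtrace performs, using the fixed passthru UUIDs and the 32 MiB / 512-byte malloc geometry seen in this run:)

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 \
      -u 00000000-0000-0000-0000-000000000002

  # Assemble the array with a superblock; raid_bdev1 should come up "online".
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'pt1 pt2' -n raid_bdev1 -s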
00:07:29.178 [2024-07-15 18:21:21.325182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.178 [2024-07-15 18:21:21.325993] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.178 [2024-07-15 18:21:21.326009] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.436 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:29.695 malloc1 00:07:29.695 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.953 [2024-07-15 18:21:22.195323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.953 [2024-07-15 18:21:22.195403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.953 [2024-07-15 18:21:22.195416] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c9100634780 00:07:29.953 [2024-07-15 18:21:22.195424] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.953 [2024-07-15 18:21:22.196333] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.953 [2024-07-15 18:21:22.196359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.954 pt1 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.954 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.954 18:21:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:30.212 malloc2 00:07:30.212 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.471 [2024-07-15 18:21:22.791687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.471 [2024-07-15 18:21:22.791784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.471 [2024-07-15 18:21:22.791796] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c9100634c80 00:07:30.471 [2024-07-15 18:21:22.791805] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.471 [2024-07-15 18:21:22.792468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.471 [2024-07-15 18:21:22.792493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.471 pt2 00:07:30.471 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:30.471 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:30.471 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:30.730 [2024-07-15 18:21:23.067860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.730 [2024-07-15 18:21:23.068441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.730 [2024-07-15 18:21:23.068500] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2c9100634f00 00:07:30.730 [2024-07-15 18:21:23.068506] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.730 [2024-07-15 18:21:23.068539] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2c9100697e20 00:07:30.730 [2024-07-15 18:21:23.068619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2c9100634f00 00:07:30.730 [2024-07-15 18:21:23.068624] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2c9100634f00 00:07:30.730 [2024-07-15 18:21:23.068651] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.730 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.995 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:30.995 "name": "raid_bdev1", 00:07:30.995 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:30.995 "strip_size_kb": 64, 00:07:30.995 "state": "online", 00:07:30.995 "raid_level": "raid0", 00:07:30.995 "superblock": true, 00:07:30.995 "num_base_bdevs": 2, 00:07:30.995 "num_base_bdevs_discovered": 2, 00:07:30.995 "num_base_bdevs_operational": 2, 00:07:30.995 "base_bdevs_list": [ 00:07:30.995 { 00:07:30.995 "name": "pt1", 00:07:30.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.995 "is_configured": true, 00:07:30.995 "data_offset": 2048, 00:07:30.995 "data_size": 63488 00:07:30.995 }, 00:07:30.995 { 00:07:30.995 "name": "pt2", 00:07:30.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.995 "is_configured": true, 00:07:30.995 "data_offset": 2048, 00:07:30.995 "data_size": 63488 00:07:30.995 } 00:07:30.995 ] 00:07:30.995 }' 00:07:30.995 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:30.995 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.563 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.563 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:31.564 [2024-07-15 18:21:23.900361] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:31.564 "name": "raid_bdev1", 00:07:31.564 "aliases": [ 00:07:31.564 "0a576de3-42d7-11ef-9ade-d5fc5159efa5" 00:07:31.564 ], 00:07:31.564 "product_name": "Raid Volume", 00:07:31.564 "block_size": 512, 00:07:31.564 "num_blocks": 126976, 00:07:31.564 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:31.564 "assigned_rate_limits": { 00:07:31.564 "rw_ios_per_sec": 0, 00:07:31.564 "rw_mbytes_per_sec": 0, 00:07:31.564 "r_mbytes_per_sec": 0, 00:07:31.564 "w_mbytes_per_sec": 0 00:07:31.564 }, 00:07:31.564 "claimed": false, 00:07:31.564 "zoned": false, 00:07:31.564 "supported_io_types": { 00:07:31.564 "read": true, 00:07:31.564 "write": true, 00:07:31.564 "unmap": true, 00:07:31.564 "flush": true, 00:07:31.564 "reset": true, 00:07:31.564 "nvme_admin": false, 00:07:31.564 "nvme_io": 
false, 00:07:31.564 "nvme_io_md": false, 00:07:31.564 "write_zeroes": true, 00:07:31.564 "zcopy": false, 00:07:31.564 "get_zone_info": false, 00:07:31.564 "zone_management": false, 00:07:31.564 "zone_append": false, 00:07:31.564 "compare": false, 00:07:31.564 "compare_and_write": false, 00:07:31.564 "abort": false, 00:07:31.564 "seek_hole": false, 00:07:31.564 "seek_data": false, 00:07:31.564 "copy": false, 00:07:31.564 "nvme_iov_md": false 00:07:31.564 }, 00:07:31.564 "memory_domains": [ 00:07:31.564 { 00:07:31.564 "dma_device_id": "system", 00:07:31.564 "dma_device_type": 1 00:07:31.564 }, 00:07:31.564 { 00:07:31.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.564 "dma_device_type": 2 00:07:31.564 }, 00:07:31.564 { 00:07:31.564 "dma_device_id": "system", 00:07:31.564 "dma_device_type": 1 00:07:31.564 }, 00:07:31.564 { 00:07:31.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.564 "dma_device_type": 2 00:07:31.564 } 00:07:31.564 ], 00:07:31.564 "driver_specific": { 00:07:31.564 "raid": { 00:07:31.564 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:31.564 "strip_size_kb": 64, 00:07:31.564 "state": "online", 00:07:31.564 "raid_level": "raid0", 00:07:31.564 "superblock": true, 00:07:31.564 "num_base_bdevs": 2, 00:07:31.564 "num_base_bdevs_discovered": 2, 00:07:31.564 "num_base_bdevs_operational": 2, 00:07:31.564 "base_bdevs_list": [ 00:07:31.564 { 00:07:31.564 "name": "pt1", 00:07:31.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.564 "is_configured": true, 00:07:31.564 "data_offset": 2048, 00:07:31.564 "data_size": 63488 00:07:31.564 }, 00:07:31.564 { 00:07:31.564 "name": "pt2", 00:07:31.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.564 "is_configured": true, 00:07:31.564 "data_offset": 2048, 00:07:31.564 "data_size": 63488 00:07:31.564 } 00:07:31.564 ] 00:07:31.564 } 00:07:31.564 } 00:07:31.564 }' 00:07:31.564 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.822 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:31.822 pt2' 00:07:31.822 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:31.822 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:31.822 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.079 "name": "pt1", 00:07:32.079 "aliases": [ 00:07:32.079 "00000000-0000-0000-0000-000000000001" 00:07:32.079 ], 00:07:32.079 "product_name": "passthru", 00:07:32.079 "block_size": 512, 00:07:32.079 "num_blocks": 65536, 00:07:32.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.079 "assigned_rate_limits": { 00:07:32.079 "rw_ios_per_sec": 0, 00:07:32.079 "rw_mbytes_per_sec": 0, 00:07:32.079 "r_mbytes_per_sec": 0, 00:07:32.079 "w_mbytes_per_sec": 0 00:07:32.079 }, 00:07:32.079 "claimed": true, 00:07:32.079 "claim_type": "exclusive_write", 00:07:32.079 "zoned": false, 00:07:32.079 "supported_io_types": { 00:07:32.079 "read": true, 00:07:32.079 "write": true, 00:07:32.079 "unmap": true, 00:07:32.079 "flush": true, 00:07:32.079 "reset": true, 00:07:32.079 "nvme_admin": false, 00:07:32.079 "nvme_io": false, 00:07:32.079 "nvme_io_md": false, 00:07:32.079 "write_zeroes": true, 
00:07:32.079 "zcopy": true, 00:07:32.079 "get_zone_info": false, 00:07:32.079 "zone_management": false, 00:07:32.079 "zone_append": false, 00:07:32.079 "compare": false, 00:07:32.079 "compare_and_write": false, 00:07:32.079 "abort": true, 00:07:32.079 "seek_hole": false, 00:07:32.079 "seek_data": false, 00:07:32.079 "copy": true, 00:07:32.079 "nvme_iov_md": false 00:07:32.079 }, 00:07:32.079 "memory_domains": [ 00:07:32.079 { 00:07:32.079 "dma_device_id": "system", 00:07:32.079 "dma_device_type": 1 00:07:32.079 }, 00:07:32.079 { 00:07:32.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.079 "dma_device_type": 2 00:07:32.079 } 00:07:32.079 ], 00:07:32.079 "driver_specific": { 00:07:32.079 "passthru": { 00:07:32.079 "name": "pt1", 00:07:32.079 "base_bdev_name": "malloc1" 00:07:32.079 } 00:07:32.079 } 00:07:32.079 }' 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:32.079 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.367 "name": "pt2", 00:07:32.367 "aliases": [ 00:07:32.367 "00000000-0000-0000-0000-000000000002" 00:07:32.367 ], 00:07:32.367 "product_name": "passthru", 00:07:32.367 "block_size": 512, 00:07:32.367 "num_blocks": 65536, 00:07:32.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.367 "assigned_rate_limits": { 00:07:32.367 "rw_ios_per_sec": 0, 00:07:32.367 "rw_mbytes_per_sec": 0, 00:07:32.367 "r_mbytes_per_sec": 0, 00:07:32.367 "w_mbytes_per_sec": 0 00:07:32.367 }, 00:07:32.367 "claimed": true, 00:07:32.367 "claim_type": "exclusive_write", 00:07:32.367 "zoned": false, 00:07:32.367 "supported_io_types": { 00:07:32.367 "read": true, 00:07:32.367 "write": true, 00:07:32.367 "unmap": true, 00:07:32.367 "flush": true, 00:07:32.367 "reset": true, 00:07:32.367 "nvme_admin": false, 00:07:32.367 "nvme_io": false, 00:07:32.367 "nvme_io_md": false, 00:07:32.367 "write_zeroes": true, 00:07:32.367 "zcopy": true, 00:07:32.367 "get_zone_info": false, 00:07:32.367 "zone_management": false, 00:07:32.367 "zone_append": false, 00:07:32.367 
"compare": false, 00:07:32.367 "compare_and_write": false, 00:07:32.367 "abort": true, 00:07:32.367 "seek_hole": false, 00:07:32.367 "seek_data": false, 00:07:32.367 "copy": true, 00:07:32.367 "nvme_iov_md": false 00:07:32.367 }, 00:07:32.367 "memory_domains": [ 00:07:32.367 { 00:07:32.367 "dma_device_id": "system", 00:07:32.367 "dma_device_type": 1 00:07:32.367 }, 00:07:32.367 { 00:07:32.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.367 "dma_device_type": 2 00:07:32.367 } 00:07:32.367 ], 00:07:32.367 "driver_specific": { 00:07:32.367 "passthru": { 00:07:32.367 "name": "pt2", 00:07:32.367 "base_bdev_name": "malloc2" 00:07:32.367 } 00:07:32.367 } 00:07:32.367 }' 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:32.367 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:32.634 [2024-07-15 18:21:24.868965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.634 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0a576de3-42d7-11ef-9ade-d5fc5159efa5 00:07:32.634 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0a576de3-42d7-11ef-9ade-d5fc5159efa5 ']' 00:07:32.634 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:32.892 [2024-07-15 18:21:25.149051] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.892 [2024-07-15 18:21:25.149077] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.892 [2024-07-15 18:21:25.149100] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.892 [2024-07-15 18:21:25.149112] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.892 [2024-07-15 18:21:25.149116] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2c9100634f00 name raid_bdev1, state offline 00:07:32.892 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:32.892 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:33.150 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:33.150 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:33.150 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.150 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:33.409 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.409 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:33.667 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:33.667 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:33.925 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:34.182 [2024-07-15 18:21:26.521859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:34.182 [2024-07-15 18:21:26.522476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:34.182 [2024-07-15 18:21:26.522500] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:34.182 [2024-07-15 18:21:26.522537] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:34.182 [2024-07-15 18:21:26.522549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.182 [2024-07-15 18:21:26.522553] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2c9100634c80 name raid_bdev1, state configuring 00:07:34.182 request: 00:07:34.182 { 00:07:34.182 "name": "raid_bdev1", 00:07:34.182 "raid_level": "raid0", 00:07:34.182 "base_bdevs": [ 00:07:34.182 "malloc1", 00:07:34.182 "malloc2" 00:07:34.182 ], 00:07:34.182 "strip_size_kb": 64, 00:07:34.182 "superblock": false, 00:07:34.182 "method": "bdev_raid_create", 00:07:34.182 "req_id": 1 00:07:34.182 } 00:07:34.182 Got JSON-RPC error response 00:07:34.182 response: 00:07:34.182 { 00:07:34.182 "code": -17, 00:07:34.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:34.182 } 00:07:34.182 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:34.182 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.182 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.182 18:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.440 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:34.440 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.699 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:34.699 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:34.699 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.957 [2024-07-15 18:21:27.106184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.957 [2024-07-15 18:21:27.106241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.957 [2024-07-15 18:21:27.106254] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c9100634780 00:07:34.957 [2024-07-15 18:21:27.106262] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.957 [2024-07-15 18:21:27.106900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.957 [2024-07-15 18:21:27.106926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.957 [2024-07-15 18:21:27.106956] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:34.957 [2024-07-15 18:21:27.106969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:34.957 pt1 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.957 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.215 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:35.215 "name": "raid_bdev1", 00:07:35.215 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:35.215 "strip_size_kb": 64, 00:07:35.215 "state": "configuring", 00:07:35.215 "raid_level": "raid0", 00:07:35.215 "superblock": true, 00:07:35.215 "num_base_bdevs": 2, 00:07:35.215 "num_base_bdevs_discovered": 1, 00:07:35.215 "num_base_bdevs_operational": 2, 00:07:35.215 "base_bdevs_list": [ 00:07:35.215 { 00:07:35.215 "name": "pt1", 00:07:35.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.215 "is_configured": true, 00:07:35.215 "data_offset": 2048, 00:07:35.215 "data_size": 63488 00:07:35.215 }, 00:07:35.215 { 00:07:35.215 "name": null, 00:07:35.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.215 "is_configured": false, 00:07:35.215 "data_offset": 2048, 00:07:35.215 "data_size": 63488 00:07:35.215 } 00:07:35.215 ] 00:07:35.215 }' 00:07:35.215 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:35.215 18:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.473 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:35.473 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:35.473 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:35.473 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:35.731 [2024-07-15 18:21:28.018748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:35.731 [2024-07-15 18:21:28.018810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.731 [2024-07-15 18:21:28.018823] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c9100634f00 00:07:35.731 [2024-07-15 18:21:28.018831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.731 [2024-07-15 18:21:28.018947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.731 [2024-07-15 18:21:28.018958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:35.731 [2024-07-15 18:21:28.018982] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:35.731 
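[Note on the trace above: this is the superblock reassembly path. Deleting raid_bdev1 leaves the RAID superblock intact on malloc1/malloc2, so re-registering a passthru bdev over each base appears to trigger the examine callback, which finds the superblock and re-claims the member; the raid sits in "configuring" until the last member (pt2) arrives. A minimal by-hand sketch of the same sequence, using the socket and UUIDs from this run and assuming the malloc bases already carry superblocks, as they do at this point in the test:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # once both members are claimed, raid_bdev1 should report "state": "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
]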
[2024-07-15 18:21:28.018992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:35.731 [2024-07-15 18:21:28.019019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2c9100635180 00:07:35.731 [2024-07-15 18:21:28.019023] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.731 [2024-07-15 18:21:28.019043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2c9100697e20 00:07:35.731 [2024-07-15 18:21:28.019106] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2c9100635180 00:07:35.731 [2024-07-15 18:21:28.019111] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2c9100635180 00:07:35.731 [2024-07-15 18:21:28.019137] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.731 pt2 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.731 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.001 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:36.001 "name": "raid_bdev1", 00:07:36.001 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:36.001 "strip_size_kb": 64, 00:07:36.001 "state": "online", 00:07:36.002 "raid_level": "raid0", 00:07:36.002 "superblock": true, 00:07:36.002 "num_base_bdevs": 2, 00:07:36.002 "num_base_bdevs_discovered": 2, 00:07:36.002 "num_base_bdevs_operational": 2, 00:07:36.002 "base_bdevs_list": [ 00:07:36.002 { 00:07:36.002 "name": "pt1", 00:07:36.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.002 "is_configured": true, 00:07:36.002 "data_offset": 2048, 00:07:36.002 "data_size": 63488 00:07:36.002 }, 00:07:36.002 { 00:07:36.002 "name": "pt2", 00:07:36.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.002 "is_configured": true, 00:07:36.002 "data_offset": 2048, 00:07:36.002 "data_size": 63488 00:07:36.002 } 00:07:36.002 ] 00:07:36.002 }' 00:07:36.002 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:07:36.002 18:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.258 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.258 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:36.258 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:36.258 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:36.258 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:36.259 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:36.259 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:36.259 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:36.516 [2024-07-15 18:21:28.871257] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:36.781 "name": "raid_bdev1", 00:07:36.781 "aliases": [ 00:07:36.781 "0a576de3-42d7-11ef-9ade-d5fc5159efa5" 00:07:36.781 ], 00:07:36.781 "product_name": "Raid Volume", 00:07:36.781 "block_size": 512, 00:07:36.781 "num_blocks": 126976, 00:07:36.781 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:36.781 "assigned_rate_limits": { 00:07:36.781 "rw_ios_per_sec": 0, 00:07:36.781 "rw_mbytes_per_sec": 0, 00:07:36.781 "r_mbytes_per_sec": 0, 00:07:36.781 "w_mbytes_per_sec": 0 00:07:36.781 }, 00:07:36.781 "claimed": false, 00:07:36.781 "zoned": false, 00:07:36.781 "supported_io_types": { 00:07:36.781 "read": true, 00:07:36.781 "write": true, 00:07:36.781 "unmap": true, 00:07:36.781 "flush": true, 00:07:36.781 "reset": true, 00:07:36.781 "nvme_admin": false, 00:07:36.781 "nvme_io": false, 00:07:36.781 "nvme_io_md": false, 00:07:36.781 "write_zeroes": true, 00:07:36.781 "zcopy": false, 00:07:36.781 "get_zone_info": false, 00:07:36.781 "zone_management": false, 00:07:36.781 "zone_append": false, 00:07:36.781 "compare": false, 00:07:36.781 "compare_and_write": false, 00:07:36.781 "abort": false, 00:07:36.781 "seek_hole": false, 00:07:36.781 "seek_data": false, 00:07:36.781 "copy": false, 00:07:36.781 "nvme_iov_md": false 00:07:36.781 }, 00:07:36.781 "memory_domains": [ 00:07:36.781 { 00:07:36.781 "dma_device_id": "system", 00:07:36.781 "dma_device_type": 1 00:07:36.781 }, 00:07:36.781 { 00:07:36.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.781 "dma_device_type": 2 00:07:36.781 }, 00:07:36.781 { 00:07:36.781 "dma_device_id": "system", 00:07:36.781 "dma_device_type": 1 00:07:36.781 }, 00:07:36.781 { 00:07:36.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.781 "dma_device_type": 2 00:07:36.781 } 00:07:36.781 ], 00:07:36.781 "driver_specific": { 00:07:36.781 "raid": { 00:07:36.781 "uuid": "0a576de3-42d7-11ef-9ade-d5fc5159efa5", 00:07:36.781 "strip_size_kb": 64, 00:07:36.781 "state": "online", 00:07:36.781 "raid_level": "raid0", 00:07:36.781 "superblock": true, 00:07:36.781 "num_base_bdevs": 2, 00:07:36.781 "num_base_bdevs_discovered": 2, 00:07:36.781 "num_base_bdevs_operational": 2, 00:07:36.781 "base_bdevs_list": [ 00:07:36.781 { 00:07:36.781 "name": "pt1", 00:07:36.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.781 "is_configured": 
true, 00:07:36.781 "data_offset": 2048, 00:07:36.781 "data_size": 63488 00:07:36.781 }, 00:07:36.781 { 00:07:36.781 "name": "pt2", 00:07:36.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.781 "is_configured": true, 00:07:36.781 "data_offset": 2048, 00:07:36.781 "data_size": 63488 00:07:36.781 } 00:07:36.781 ] 00:07:36.781 } 00:07:36.781 } 00:07:36.781 }' 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:36.781 pt2' 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:36.781 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:37.041 "name": "pt1", 00:07:37.041 "aliases": [ 00:07:37.041 "00000000-0000-0000-0000-000000000001" 00:07:37.041 ], 00:07:37.041 "product_name": "passthru", 00:07:37.041 "block_size": 512, 00:07:37.041 "num_blocks": 65536, 00:07:37.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.041 "assigned_rate_limits": { 00:07:37.041 "rw_ios_per_sec": 0, 00:07:37.041 "rw_mbytes_per_sec": 0, 00:07:37.041 "r_mbytes_per_sec": 0, 00:07:37.041 "w_mbytes_per_sec": 0 00:07:37.041 }, 00:07:37.041 "claimed": true, 00:07:37.041 "claim_type": "exclusive_write", 00:07:37.041 "zoned": false, 00:07:37.041 "supported_io_types": { 00:07:37.041 "read": true, 00:07:37.041 "write": true, 00:07:37.041 "unmap": true, 00:07:37.041 "flush": true, 00:07:37.041 "reset": true, 00:07:37.041 "nvme_admin": false, 00:07:37.041 "nvme_io": false, 00:07:37.041 "nvme_io_md": false, 00:07:37.041 "write_zeroes": true, 00:07:37.041 "zcopy": true, 00:07:37.041 "get_zone_info": false, 00:07:37.041 "zone_management": false, 00:07:37.041 "zone_append": false, 00:07:37.041 "compare": false, 00:07:37.041 "compare_and_write": false, 00:07:37.041 "abort": true, 00:07:37.041 "seek_hole": false, 00:07:37.041 "seek_data": false, 00:07:37.041 "copy": true, 00:07:37.041 "nvme_iov_md": false 00:07:37.041 }, 00:07:37.041 "memory_domains": [ 00:07:37.041 { 00:07:37.041 "dma_device_id": "system", 00:07:37.041 "dma_device_type": 1 00:07:37.041 }, 00:07:37.041 { 00:07:37.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.041 "dma_device_type": 2 00:07:37.041 } 00:07:37.041 ], 00:07:37.041 "driver_specific": { 00:07:37.041 "passthru": { 00:07:37.041 "name": "pt1", 00:07:37.041 "base_bdev_name": "malloc1" 00:07:37.041 } 00:07:37.041 } 00:07:37.041 }' 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:37.041 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:37.300 "name": "pt2", 00:07:37.300 "aliases": [ 00:07:37.300 "00000000-0000-0000-0000-000000000002" 00:07:37.300 ], 00:07:37.300 "product_name": "passthru", 00:07:37.300 "block_size": 512, 00:07:37.300 "num_blocks": 65536, 00:07:37.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.300 "assigned_rate_limits": { 00:07:37.300 "rw_ios_per_sec": 0, 00:07:37.300 "rw_mbytes_per_sec": 0, 00:07:37.300 "r_mbytes_per_sec": 0, 00:07:37.300 "w_mbytes_per_sec": 0 00:07:37.300 }, 00:07:37.300 "claimed": true, 00:07:37.300 "claim_type": "exclusive_write", 00:07:37.300 "zoned": false, 00:07:37.300 "supported_io_types": { 00:07:37.300 "read": true, 00:07:37.300 "write": true, 00:07:37.300 "unmap": true, 00:07:37.300 "flush": true, 00:07:37.300 "reset": true, 00:07:37.300 "nvme_admin": false, 00:07:37.300 "nvme_io": false, 00:07:37.300 "nvme_io_md": false, 00:07:37.300 "write_zeroes": true, 00:07:37.300 "zcopy": true, 00:07:37.300 "get_zone_info": false, 00:07:37.300 "zone_management": false, 00:07:37.300 "zone_append": false, 00:07:37.300 "compare": false, 00:07:37.300 "compare_and_write": false, 00:07:37.300 "abort": true, 00:07:37.300 "seek_hole": false, 00:07:37.300 "seek_data": false, 00:07:37.300 "copy": true, 00:07:37.300 "nvme_iov_md": false 00:07:37.300 }, 00:07:37.300 "memory_domains": [ 00:07:37.300 { 00:07:37.300 "dma_device_id": "system", 00:07:37.300 "dma_device_type": 1 00:07:37.300 }, 00:07:37.300 { 00:07:37.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.300 "dma_device_type": 2 00:07:37.300 } 00:07:37.300 ], 00:07:37.300 "driver_specific": { 00:07:37.300 "passthru": { 00:07:37.300 "name": "pt2", 00:07:37.300 "base_bdev_name": "malloc2" 00:07:37.300 } 00:07:37.300 } 00:07:37.300 }' 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:37.300 18:21:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:37.300 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:37.561 [2024-07-15 18:21:29.879855] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0a576de3-42d7-11ef-9ade-d5fc5159efa5 '!=' 0a576de3-42d7-11ef-9ade-d5fc5159efa5 ']' 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49197 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49197 ']' 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49197 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49197 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:37.561 killing process with pid 49197 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49197' 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49197 00:07:37.561 [2024-07-15 18:21:29.910976] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.561 [2024-07-15 18:21:29.911019] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.561 [2024-07-15 18:21:29.911031] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.561 [2024-07-15 18:21:29.911036] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2c9100635180 name raid_bdev1, state offline 00:07:37.561 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49197 00:07:37.820 [2024-07-15 18:21:29.925554] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.820 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:37.820 00:07:37.820 real 0m9.566s 00:07:37.820 user 0m16.648s 00:07:37.820 sys 0m1.657s 00:07:37.820 18:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.820 18:21:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.820 ************************************ 00:07:37.820 END TEST raid_superblock_test 00:07:37.820 ************************************ 00:07:38.079 18:21:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:38.079 18:21:30 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:38.079 18:21:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:38.079 18:21:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.079 18:21:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.079 ************************************ 00:07:38.079 START TEST raid_read_error_test 00:07:38.079 ************************************ 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZI5ffzTma4 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49467 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49467 
/var/tmp/spdk-raid.sock 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49467 ']' 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.079 18:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.079 [2024-07-15 18:21:30.214739] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:38.079 [2024-07-15 18:21:30.215003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:38.646 EAL: TSC is not safe to use in SMP mode 00:07:38.646 EAL: TSC is not invariant 00:07:38.646 [2024-07-15 18:21:30.834884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.646 [2024-07-15 18:21:30.950546] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:38.646 [2024-07-15 18:21:30.952634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.646 [2024-07-15 18:21:30.953408] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.646 [2024-07-15 18:21:30.953422] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.214 18:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.214 18:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:39.214 18:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:39.214 18:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.214 BaseBdev1_malloc 00:07:39.214 18:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:39.473 true 00:07:39.473 18:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.732 [2024-07-15 18:21:32.026218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.733 [2024-07-15 18:21:32.026286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.733 [2024-07-15 18:21:32.026315] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15deafc34780 00:07:39.733 [2024-07-15 18:21:32.026324] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
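[Each RAID member in raid_read_error_test is a three-layer stack: a malloc base, an error bdev wrapped around it (registered as EE_<name>, per the trace), and a passthru bdev on top that the RAID actually claims. A sketch of the BaseBdev1 stack using the same RPCs and sizes as this run; rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used throughout this log:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc   # 32 MiB base, 512-byte blocks (65536 blocks)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc              # exposes EE_BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # later the test fails I/O on the hidden error layer without touching the RAID path:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
]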
00:07:39.733 [2024-07-15 18:21:32.027024] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.733 [2024-07-15 18:21:32.027052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.733 BaseBdev1 00:07:39.733 18:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:39.733 18:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.990 BaseBdev2_malloc 00:07:39.990 18:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:40.247 true 00:07:40.247 18:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:40.506 [2024-07-15 18:21:32.738641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:40.506 [2024-07-15 18:21:32.738702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.506 [2024-07-15 18:21:32.738732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15deafc34c80 00:07:40.506 [2024-07-15 18:21:32.738741] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.506 [2024-07-15 18:21:32.739424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.506 [2024-07-15 18:21:32.739451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:40.506 BaseBdev2 00:07:40.506 18:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:40.765 [2024-07-15 18:21:33.010795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.765 [2024-07-15 18:21:33.011386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.765 [2024-07-15 18:21:33.011484] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15deafc34f00 00:07:40.765 [2024-07-15 18:21:33.011490] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.765 [2024-07-15 18:21:33.011525] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15deafca0e20 00:07:40.765 [2024-07-15 18:21:33.011604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15deafc34f00 00:07:40.765 [2024-07-15 18:21:33.011608] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15deafc34f00 00:07:40.765 [2024-07-15 18:21:33.011642] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:40.765 18:21:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.765 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.027 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:41.027 "name": "raid_bdev1", 00:07:41.027 "uuid": "104499db-42d7-11ef-9ade-d5fc5159efa5", 00:07:41.027 "strip_size_kb": 64, 00:07:41.027 "state": "online", 00:07:41.027 "raid_level": "raid0", 00:07:41.027 "superblock": true, 00:07:41.027 "num_base_bdevs": 2, 00:07:41.027 "num_base_bdevs_discovered": 2, 00:07:41.027 "num_base_bdevs_operational": 2, 00:07:41.027 "base_bdevs_list": [ 00:07:41.027 { 00:07:41.027 "name": "BaseBdev1", 00:07:41.027 "uuid": "a8504a5c-1613-ce50-ac01-b34c5d8f3c4e", 00:07:41.027 "is_configured": true, 00:07:41.027 "data_offset": 2048, 00:07:41.027 "data_size": 63488 00:07:41.027 }, 00:07:41.027 { 00:07:41.027 "name": "BaseBdev2", 00:07:41.027 "uuid": "c07d1536-14ae-065b-aa9b-0947b47f11ca", 00:07:41.027 "is_configured": true, 00:07:41.027 "data_offset": 2048, 00:07:41.027 "data_size": 63488 00:07:41.028 } 00:07:41.028 ] 00:07:41.028 }' 00:07:41.028 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:41.028 18:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.286 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:41.286 18:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:41.543 [2024-07-15 18:21:33.731467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15deafca0ec0 00:07:42.475 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:42.747 18:21:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:42.747 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.748 18:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:43.036 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:43.036 "name": "raid_bdev1", 00:07:43.036 "uuid": "104499db-42d7-11ef-9ade-d5fc5159efa5", 00:07:43.036 "strip_size_kb": 64, 00:07:43.036 "state": "online", 00:07:43.036 "raid_level": "raid0", 00:07:43.036 "superblock": true, 00:07:43.036 "num_base_bdevs": 2, 00:07:43.036 "num_base_bdevs_discovered": 2, 00:07:43.036 "num_base_bdevs_operational": 2, 00:07:43.036 "base_bdevs_list": [ 00:07:43.036 { 00:07:43.036 "name": "BaseBdev1", 00:07:43.036 "uuid": "a8504a5c-1613-ce50-ac01-b34c5d8f3c4e", 00:07:43.036 "is_configured": true, 00:07:43.036 "data_offset": 2048, 00:07:43.036 "data_size": 63488 00:07:43.036 }, 00:07:43.036 { 00:07:43.036 "name": "BaseBdev2", 00:07:43.036 "uuid": "c07d1536-14ae-065b-aa9b-0947b47f11ca", 00:07:43.036 "is_configured": true, 00:07:43.036 "data_offset": 2048, 00:07:43.036 "data_size": 63488 00:07:43.036 } 00:07:43.036 ] 00:07:43.036 }' 00:07:43.036 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:43.036 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.294 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:43.552 [2024-07-15 18:21:35.787573] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.552 [2024-07-15 18:21:35.787600] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.552 [2024-07-15 18:21:35.787965] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.552 [2024-07-15 18:21:35.787975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.552 [2024-07-15 18:21:35.787981] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.552 [2024-07-15 18:21:35.787986] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15deafc34f00 name raid_bdev1, state offline 00:07:43.552 0 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49467 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49467 ']' 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49467 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49467 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:43.552 killing process with pid 49467 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49467' 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49467 00:07:43.552 [2024-07-15 18:21:35.814991] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.552 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49467 00:07:43.552 [2024-07-15 18:21:35.828727] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.830 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZI5ffzTma4 00:07:43.830 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:07:43.831 00:07:43.831 real 0m5.865s 00:07:43.831 user 0m8.927s 00:07:43.831 sys 0m1.074s 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.831 18:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 ************************************ 00:07:43.831 END TEST raid_read_error_test 00:07:43.831 ************************************ 00:07:43.831 18:21:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:43.831 18:21:36 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:43.831 18:21:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:43.831 18:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.831 18:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 ************************************ 00:07:43.831 START TEST raid_write_error_test 00:07:43.831 ************************************ 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KQE61cCZXD 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49594 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49594 /var/tmp/spdk-raid.sock 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49594 ']' 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.831 18:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 [2024-07-15 18:21:36.123458] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
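[raid_write_error_test drives I/O through the same bdevperf harness as the read test above: bdevperf starts with -z (wait for RPC), the RAID stack is assembled over the socket, bdevperf.py triggers the workload, and the per-second failure count for raid_bdev1 is scraped from the log file. The invocation below is as recorded for this test; the scrape step mirrors the read test's three-command pipeline, with this run's log file /raidtest/tmp.KQE61cCZXD substituted:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  grep -v Job /raidtest/tmp.KQE61cCZXD | grep raid_bdev1 | awk '{print $6}'   # fail_per_s; raid0 has no redundancy, so nonzero is expected

For the read test this yielded fail_per_s=0.49, and the assertion that it differs from 0.00 passed.]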
00:07:43.831 [2024-07-15 18:21:36.123705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:44.397 EAL: TSC is not safe to use in SMP mode 00:07:44.397 EAL: TSC is not invariant 00:07:44.397 [2024-07-15 18:21:36.752260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.655 [2024-07-15 18:21:36.862686] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:44.655 [2024-07-15 18:21:36.865029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.655 [2024-07-15 18:21:36.865825] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.655 [2024-07-15 18:21:36.865839] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.912 18:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.912 18:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:44.912 18:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:44.912 18:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:45.170 BaseBdev1_malloc 00:07:45.170 18:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:45.428 true 00:07:45.428 18:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:45.686 [2024-07-15 18:21:37.998640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:45.686 [2024-07-15 18:21:37.998717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.686 [2024-07-15 18:21:37.998746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20d0bb434780 00:07:45.686 [2024-07-15 18:21:37.998755] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.686 [2024-07-15 18:21:37.999480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.686 [2024-07-15 18:21:37.999505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:45.686 BaseBdev1 00:07:45.686 18:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:45.686 18:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:45.944 BaseBdev2_malloc 00:07:45.944 18:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:46.202 true 00:07:46.202 18:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.461 [2024-07-15 18:21:38.795081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.461 [2024-07-15 18:21:38.795141] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.461 [2024-07-15 18:21:38.795169] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20d0bb434c80 00:07:46.461 [2024-07-15 18:21:38.795178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.461 [2024-07-15 18:21:38.795885] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.461 [2024-07-15 18:21:38.795911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.461 BaseBdev2 00:07:46.461 18:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:46.770 [2024-07-15 18:21:39.059231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.770 [2024-07-15 18:21:39.059860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.770 [2024-07-15 18:21:39.059928] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20d0bb434f00 00:07:46.770 [2024-07-15 18:21:39.059934] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.770 [2024-07-15 18:21:39.059967] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d0bb4a0e20 00:07:46.770 [2024-07-15 18:21:39.060046] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20d0bb434f00 00:07:46.770 [2024-07-15 18:21:39.060051] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20d0bb434f00 00:07:46.770 [2024-07-15 18:21:39.060081] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.770 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.029 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.029 "name": "raid_bdev1", 00:07:47.029 "uuid": "13df84d1-42d7-11ef-9ade-d5fc5159efa5", 00:07:47.029 "strip_size_kb": 64, 00:07:47.029 "state": "online", 00:07:47.029 
"raid_level": "raid0", 00:07:47.029 "superblock": true, 00:07:47.029 "num_base_bdevs": 2, 00:07:47.029 "num_base_bdevs_discovered": 2, 00:07:47.029 "num_base_bdevs_operational": 2, 00:07:47.029 "base_bdevs_list": [ 00:07:47.029 { 00:07:47.029 "name": "BaseBdev1", 00:07:47.029 "uuid": "117da248-4c5b-1a5a-8e81-03f794059cc4", 00:07:47.029 "is_configured": true, 00:07:47.029 "data_offset": 2048, 00:07:47.029 "data_size": 63488 00:07:47.029 }, 00:07:47.029 { 00:07:47.029 "name": "BaseBdev2", 00:07:47.029 "uuid": "385a4290-7ece-3353-a867-8628e5e93698", 00:07:47.029 "is_configured": true, 00:07:47.029 "data_offset": 2048, 00:07:47.029 "data_size": 63488 00:07:47.029 } 00:07:47.029 ] 00:07:47.029 }' 00:07:47.029 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.029 18:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.595 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:47.595 18:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:47.595 [2024-07-15 18:21:39.787818] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d0bb4a0ec0 00:07:48.531 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.790 18:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.048 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:49.048 "name": "raid_bdev1", 00:07:49.048 "uuid": "13df84d1-42d7-11ef-9ade-d5fc5159efa5", 00:07:49.048 "strip_size_kb": 64, 00:07:49.048 "state": "online", 00:07:49.048 
"raid_level": "raid0", 00:07:49.048 "superblock": true, 00:07:49.048 "num_base_bdevs": 2, 00:07:49.048 "num_base_bdevs_discovered": 2, 00:07:49.048 "num_base_bdevs_operational": 2, 00:07:49.048 "base_bdevs_list": [ 00:07:49.048 { 00:07:49.048 "name": "BaseBdev1", 00:07:49.048 "uuid": "117da248-4c5b-1a5a-8e81-03f794059cc4", 00:07:49.048 "is_configured": true, 00:07:49.048 "data_offset": 2048, 00:07:49.048 "data_size": 63488 00:07:49.048 }, 00:07:49.048 { 00:07:49.048 "name": "BaseBdev2", 00:07:49.048 "uuid": "385a4290-7ece-3353-a867-8628e5e93698", 00:07:49.048 "is_configured": true, 00:07:49.048 "data_offset": 2048, 00:07:49.048 "data_size": 63488 00:07:49.048 } 00:07:49.048 ] 00:07:49.048 }' 00:07:49.048 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:49.048 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.306 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:49.564 [2024-07-15 18:21:41.807366] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.564 [2024-07-15 18:21:41.807397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.564 [2024-07-15 18:21:41.807739] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.564 [2024-07-15 18:21:41.807749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.564 [2024-07-15 18:21:41.807756] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.564 [2024-07-15 18:21:41.807760] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20d0bb434f00 name raid_bdev1, state offline 00:07:49.564 0 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49594 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49594 ']' 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49594 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49594 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:49.564 killing process with pid 49594 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49594' 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49594 00:07:49.564 [2024-07-15 18:21:41.841601] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.564 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49594 00:07:49.564 [2024-07-15 18:21:41.856209] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.KQE61cCZXD 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:07:49.822 00:07:49.822 real 0m5.977s 00:07:49.822 user 0m9.043s 00:07:49.822 sys 0m1.211s 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.822 ************************************ 00:07:49.822 18:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.822 END TEST raid_write_error_test 00:07:49.822 ************************************ 00:07:49.822 18:21:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:49.822 18:21:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:49.822 18:21:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:49.822 18:21:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:49.822 18:21:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.822 18:21:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.822 ************************************ 00:07:49.822 START TEST raid_state_function_test 00:07:49.822 ************************************ 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.822 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:49.823 18:21:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49720 00:07:49.823 Process raid pid: 49720 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49720' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49720 /var/tmp/spdk-raid.sock 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49720 ']' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.823 18:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.823 [2024-07-15 18:21:42.148467] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:49.823 [2024-07-15 18:21:42.148732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:50.386 EAL: TSC is not safe to use in SMP mode 00:07:50.386 EAL: TSC is not invariant 00:07:50.643 [2024-07-15 18:21:42.758051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.643 [2024-07-15 18:21:42.870545] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
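The bdev stack assembled in the write-error trace above is built once per base device and is the core of raid_io_error_test: a malloc bdev wrapped by an error bdev, wrapped in turn by a passthru bdev that presents the name the RAID volume consumes. A condensed sketch of that RPC sequence, with every argument taken from the trace (the rpc function is shorthand introduced for this sketch only):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc   # 32 MiB backing store, 512-byte blocks (65536 blocks)
rpc bdev_error_create BaseBdev1_malloc              # yields EE_BaseBdev1_malloc, which accepts injected failures
rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# the same three calls repeat for BaseBdev2, then the array itself is created:
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
# arm the failure before I/O is kicked off via bdevperf.py perform_tests:
rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

The pass criterion at the end of the test is deliberately loose: the per-second failure rate bdevperf reports for raid_bdev1 (the 0.50 extracted above with grep and awk) must be nonzero, since raid0 has no redundancy that could absorb the injected write errors.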
00:07:50.643 [2024-07-15 18:21:42.872810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.643 [2024-07-15 18:21:42.873592] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.643 [2024-07-15 18:21:42.873606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.901 18:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.901 18:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:50.901 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.159 [2024-07-15 18:21:43.486123] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.159 [2024-07-15 18:21:43.486226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.159 [2024-07-15 18:21:43.486237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.159 [2024-07-15 18:21:43.486254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.159 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.430 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:51.430 "name": "Existed_Raid", 00:07:51.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.430 "strip_size_kb": 64, 00:07:51.430 "state": "configuring", 00:07:51.430 "raid_level": "concat", 00:07:51.430 "superblock": false, 00:07:51.430 "num_base_bdevs": 2, 00:07:51.430 "num_base_bdevs_discovered": 0, 00:07:51.430 "num_base_bdevs_operational": 2, 00:07:51.430 "base_bdevs_list": [ 00:07:51.430 { 00:07:51.430 "name": "BaseBdev1", 00:07:51.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.430 "is_configured": false, 00:07:51.430 "data_offset": 0, 00:07:51.430 "data_size": 0 00:07:51.430 }, 00:07:51.430 { 00:07:51.430 "name": "BaseBdev2", 
00:07:51.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.430 "is_configured": false, 00:07:51.430 "data_offset": 0, 00:07:51.430 "data_size": 0 00:07:51.430 } 00:07:51.430 ] 00:07:51.430 }' 00:07:51.430 18:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:51.430 18:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.004 18:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:52.004 [2024-07-15 18:21:44.326495] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.004 [2024-07-15 18:21:44.326540] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x223214a34500 name Existed_Raid, state configuring 00:07:52.004 18:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:52.262 [2024-07-15 18:21:44.602636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.262 [2024-07-15 18:21:44.602695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.262 [2024-07-15 18:21:44.602701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.262 [2024-07-15 18:21:44.602710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.262 18:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.521 [2024-07-15 18:21:44.871800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.521 BaseBdev1 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:52.521 18:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.087 [ 00:07:53.087 { 00:07:53.087 "name": "BaseBdev1", 00:07:53.087 "aliases": [ 00:07:53.087 "17564a77-42d7-11ef-9ade-d5fc5159efa5" 00:07:53.087 ], 00:07:53.087 "product_name": "Malloc disk", 00:07:53.087 "block_size": 512, 00:07:53.087 "num_blocks": 65536, 00:07:53.087 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:53.087 "assigned_rate_limits": { 00:07:53.087 "rw_ios_per_sec": 0, 00:07:53.087 "rw_mbytes_per_sec": 0, 00:07:53.087 "r_mbytes_per_sec": 0, 00:07:53.087 "w_mbytes_per_sec": 0 00:07:53.087 }, 
00:07:53.087 "claimed": true, 00:07:53.087 "claim_type": "exclusive_write", 00:07:53.087 "zoned": false, 00:07:53.087 "supported_io_types": { 00:07:53.087 "read": true, 00:07:53.087 "write": true, 00:07:53.087 "unmap": true, 00:07:53.087 "flush": true, 00:07:53.087 "reset": true, 00:07:53.087 "nvme_admin": false, 00:07:53.087 "nvme_io": false, 00:07:53.087 "nvme_io_md": false, 00:07:53.087 "write_zeroes": true, 00:07:53.087 "zcopy": true, 00:07:53.087 "get_zone_info": false, 00:07:53.087 "zone_management": false, 00:07:53.087 "zone_append": false, 00:07:53.087 "compare": false, 00:07:53.087 "compare_and_write": false, 00:07:53.087 "abort": true, 00:07:53.087 "seek_hole": false, 00:07:53.087 "seek_data": false, 00:07:53.087 "copy": true, 00:07:53.087 "nvme_iov_md": false 00:07:53.087 }, 00:07:53.087 "memory_domains": [ 00:07:53.087 { 00:07:53.087 "dma_device_id": "system", 00:07:53.087 "dma_device_type": 1 00:07:53.087 }, 00:07:53.087 { 00:07:53.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.087 "dma_device_type": 2 00:07:53.087 } 00:07:53.087 ], 00:07:53.087 "driver_specific": {} 00:07:53.087 } 00:07:53.087 ] 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.087 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.346 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:53.346 "name": "Existed_Raid", 00:07:53.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.346 "strip_size_kb": 64, 00:07:53.346 "state": "configuring", 00:07:53.346 "raid_level": "concat", 00:07:53.346 "superblock": false, 00:07:53.346 "num_base_bdevs": 2, 00:07:53.346 "num_base_bdevs_discovered": 1, 00:07:53.346 "num_base_bdevs_operational": 2, 00:07:53.346 "base_bdevs_list": [ 00:07:53.346 { 00:07:53.346 "name": "BaseBdev1", 00:07:53.346 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:53.346 "is_configured": true, 00:07:53.346 "data_offset": 0, 00:07:53.346 "data_size": 65536 00:07:53.346 }, 00:07:53.346 { 00:07:53.346 "name": "BaseBdev2", 00:07:53.346 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:53.346 "is_configured": false, 00:07:53.346 "data_offset": 0, 00:07:53.346 "data_size": 0 00:07:53.346 } 00:07:53.346 ] 00:07:53.346 }' 00:07:53.346 18:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:53.346 18:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:54.172 [2024-07-15 18:21:46.303410] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.172 [2024-07-15 18:21:46.303447] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x223214a34500 name Existed_Raid, state configuring 00:07:54.172 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:54.430 [2024-07-15 18:21:46.575527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.430 [2024-07-15 18:21:46.576362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.430 [2024-07-15 18:21:46.576404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.430 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.689 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.689 "name": "Existed_Raid", 00:07:54.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.689 "strip_size_kb": 64, 00:07:54.689 "state": "configuring", 00:07:54.689 "raid_level": "concat", 00:07:54.689 "superblock": false, 00:07:54.689 "num_base_bdevs": 2, 00:07:54.689 "num_base_bdevs_discovered": 1, 00:07:54.689 
"num_base_bdevs_operational": 2, 00:07:54.689 "base_bdevs_list": [ 00:07:54.689 { 00:07:54.689 "name": "BaseBdev1", 00:07:54.689 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:54.689 "is_configured": true, 00:07:54.689 "data_offset": 0, 00:07:54.689 "data_size": 65536 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "name": "BaseBdev2", 00:07:54.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.689 "is_configured": false, 00:07:54.689 "data_offset": 0, 00:07:54.689 "data_size": 0 00:07:54.689 } 00:07:54.689 ] 00:07:54.689 }' 00:07:54.689 18:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.689 18:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.946 18:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.204 [2024-07-15 18:21:47.416049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.204 [2024-07-15 18:21:47.416080] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x223214a34a00 00:07:55.204 [2024-07-15 18:21:47.416085] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.204 [2024-07-15 18:21:47.416107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x223214a97e20 00:07:55.204 [2024-07-15 18:21:47.416230] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x223214a34a00 00:07:55.204 [2024-07-15 18:21:47.416234] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x223214a34a00 00:07:55.204 [2024-07-15 18:21:47.416269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.204 BaseBdev2 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:55.204 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:55.463 18:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.722 [ 00:07:55.722 { 00:07:55.722 "name": "BaseBdev2", 00:07:55.722 "aliases": [ 00:07:55.722 "18daa602-42d7-11ef-9ade-d5fc5159efa5" 00:07:55.722 ], 00:07:55.722 "product_name": "Malloc disk", 00:07:55.722 "block_size": 512, 00:07:55.722 "num_blocks": 65536, 00:07:55.722 "uuid": "18daa602-42d7-11ef-9ade-d5fc5159efa5", 00:07:55.722 "assigned_rate_limits": { 00:07:55.722 "rw_ios_per_sec": 0, 00:07:55.722 "rw_mbytes_per_sec": 0, 00:07:55.722 "r_mbytes_per_sec": 0, 00:07:55.722 "w_mbytes_per_sec": 0 00:07:55.722 }, 00:07:55.722 "claimed": true, 00:07:55.722 "claim_type": "exclusive_write", 00:07:55.722 "zoned": 
false, 00:07:55.722 "supported_io_types": { 00:07:55.722 "read": true, 00:07:55.722 "write": true, 00:07:55.722 "unmap": true, 00:07:55.722 "flush": true, 00:07:55.722 "reset": true, 00:07:55.722 "nvme_admin": false, 00:07:55.722 "nvme_io": false, 00:07:55.722 "nvme_io_md": false, 00:07:55.722 "write_zeroes": true, 00:07:55.722 "zcopy": true, 00:07:55.722 "get_zone_info": false, 00:07:55.722 "zone_management": false, 00:07:55.722 "zone_append": false, 00:07:55.722 "compare": false, 00:07:55.722 "compare_and_write": false, 00:07:55.722 "abort": true, 00:07:55.722 "seek_hole": false, 00:07:55.722 "seek_data": false, 00:07:55.722 "copy": true, 00:07:55.722 "nvme_iov_md": false 00:07:55.722 }, 00:07:55.722 "memory_domains": [ 00:07:55.722 { 00:07:55.722 "dma_device_id": "system", 00:07:55.722 "dma_device_type": 1 00:07:55.722 }, 00:07:55.722 { 00:07:55.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.722 "dma_device_type": 2 00:07:55.722 } 00:07:55.722 ], 00:07:55.722 "driver_specific": {} 00:07:55.722 } 00:07:55.722 ] 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.722 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.046 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:56.046 "name": "Existed_Raid", 00:07:56.046 "uuid": "18daad07-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.046 "strip_size_kb": 64, 00:07:56.046 "state": "online", 00:07:56.046 "raid_level": "concat", 00:07:56.046 "superblock": false, 00:07:56.046 "num_base_bdevs": 2, 00:07:56.046 "num_base_bdevs_discovered": 2, 00:07:56.046 "num_base_bdevs_operational": 2, 00:07:56.046 "base_bdevs_list": [ 00:07:56.046 { 00:07:56.046 "name": "BaseBdev1", 00:07:56.046 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.046 "is_configured": true, 00:07:56.046 "data_offset": 0, 00:07:56.046 "data_size": 65536 00:07:56.046 }, 00:07:56.046 { 
00:07:56.046 "name": "BaseBdev2", 00:07:56.046 "uuid": "18daa602-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.046 "is_configured": true, 00:07:56.046 "data_offset": 0, 00:07:56.046 "data_size": 65536 00:07:56.046 } 00:07:56.046 ] 00:07:56.046 }' 00:07:56.046 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:56.046 18:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:56.306 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:56.564 [2024-07-15 18:21:48.812515] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.564 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:56.564 "name": "Existed_Raid", 00:07:56.564 "aliases": [ 00:07:56.564 "18daad07-42d7-11ef-9ade-d5fc5159efa5" 00:07:56.564 ], 00:07:56.564 "product_name": "Raid Volume", 00:07:56.564 "block_size": 512, 00:07:56.564 "num_blocks": 131072, 00:07:56.564 "uuid": "18daad07-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.564 "assigned_rate_limits": { 00:07:56.564 "rw_ios_per_sec": 0, 00:07:56.564 "rw_mbytes_per_sec": 0, 00:07:56.564 "r_mbytes_per_sec": 0, 00:07:56.564 "w_mbytes_per_sec": 0 00:07:56.564 }, 00:07:56.564 "claimed": false, 00:07:56.564 "zoned": false, 00:07:56.564 "supported_io_types": { 00:07:56.564 "read": true, 00:07:56.564 "write": true, 00:07:56.564 "unmap": true, 00:07:56.564 "flush": true, 00:07:56.564 "reset": true, 00:07:56.564 "nvme_admin": false, 00:07:56.564 "nvme_io": false, 00:07:56.564 "nvme_io_md": false, 00:07:56.564 "write_zeroes": true, 00:07:56.564 "zcopy": false, 00:07:56.564 "get_zone_info": false, 00:07:56.564 "zone_management": false, 00:07:56.564 "zone_append": false, 00:07:56.564 "compare": false, 00:07:56.564 "compare_and_write": false, 00:07:56.564 "abort": false, 00:07:56.564 "seek_hole": false, 00:07:56.564 "seek_data": false, 00:07:56.564 "copy": false, 00:07:56.564 "nvme_iov_md": false 00:07:56.564 }, 00:07:56.564 "memory_domains": [ 00:07:56.564 { 00:07:56.564 "dma_device_id": "system", 00:07:56.564 "dma_device_type": 1 00:07:56.564 }, 00:07:56.564 { 00:07:56.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.564 "dma_device_type": 2 00:07:56.564 }, 00:07:56.564 { 00:07:56.564 "dma_device_id": "system", 00:07:56.565 "dma_device_type": 1 00:07:56.565 }, 00:07:56.565 { 00:07:56.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.565 "dma_device_type": 2 00:07:56.565 } 00:07:56.565 ], 00:07:56.565 "driver_specific": { 00:07:56.565 "raid": { 00:07:56.565 "uuid": "18daad07-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.565 "strip_size_kb": 64, 00:07:56.565 "state": 
"online", 00:07:56.565 "raid_level": "concat", 00:07:56.565 "superblock": false, 00:07:56.565 "num_base_bdevs": 2, 00:07:56.565 "num_base_bdevs_discovered": 2, 00:07:56.565 "num_base_bdevs_operational": 2, 00:07:56.565 "base_bdevs_list": [ 00:07:56.565 { 00:07:56.565 "name": "BaseBdev1", 00:07:56.565 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.565 "is_configured": true, 00:07:56.565 "data_offset": 0, 00:07:56.565 "data_size": 65536 00:07:56.565 }, 00:07:56.565 { 00:07:56.565 "name": "BaseBdev2", 00:07:56.565 "uuid": "18daa602-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.565 "is_configured": true, 00:07:56.565 "data_offset": 0, 00:07:56.565 "data_size": 65536 00:07:56.565 } 00:07:56.565 ] 00:07:56.565 } 00:07:56.565 } 00:07:56.565 }' 00:07:56.565 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.565 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:56.565 BaseBdev2' 00:07:56.565 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.565 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:56.565 18:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.824 "name": "BaseBdev1", 00:07:56.824 "aliases": [ 00:07:56.824 "17564a77-42d7-11ef-9ade-d5fc5159efa5" 00:07:56.824 ], 00:07:56.824 "product_name": "Malloc disk", 00:07:56.824 "block_size": 512, 00:07:56.824 "num_blocks": 65536, 00:07:56.824 "uuid": "17564a77-42d7-11ef-9ade-d5fc5159efa5", 00:07:56.824 "assigned_rate_limits": { 00:07:56.824 "rw_ios_per_sec": 0, 00:07:56.824 "rw_mbytes_per_sec": 0, 00:07:56.824 "r_mbytes_per_sec": 0, 00:07:56.824 "w_mbytes_per_sec": 0 00:07:56.824 }, 00:07:56.824 "claimed": true, 00:07:56.824 "claim_type": "exclusive_write", 00:07:56.824 "zoned": false, 00:07:56.824 "supported_io_types": { 00:07:56.824 "read": true, 00:07:56.824 "write": true, 00:07:56.824 "unmap": true, 00:07:56.824 "flush": true, 00:07:56.824 "reset": true, 00:07:56.824 "nvme_admin": false, 00:07:56.824 "nvme_io": false, 00:07:56.824 "nvme_io_md": false, 00:07:56.824 "write_zeroes": true, 00:07:56.824 "zcopy": true, 00:07:56.824 "get_zone_info": false, 00:07:56.824 "zone_management": false, 00:07:56.824 "zone_append": false, 00:07:56.824 "compare": false, 00:07:56.824 "compare_and_write": false, 00:07:56.824 "abort": true, 00:07:56.824 "seek_hole": false, 00:07:56.824 "seek_data": false, 00:07:56.824 "copy": true, 00:07:56.824 "nvme_iov_md": false 00:07:56.824 }, 00:07:56.824 "memory_domains": [ 00:07:56.824 { 00:07:56.824 "dma_device_id": "system", 00:07:56.824 "dma_device_type": 1 00:07:56.824 }, 00:07:56.824 { 00:07:56.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.824 "dma_device_type": 2 00:07:56.824 } 00:07:56.824 ], 00:07:56.824 "driver_specific": {} 00:07:56.824 }' 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:56.824 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:57.084 "name": "BaseBdev2", 00:07:57.084 "aliases": [ 00:07:57.084 "18daa602-42d7-11ef-9ade-d5fc5159efa5" 00:07:57.084 ], 00:07:57.084 "product_name": "Malloc disk", 00:07:57.084 "block_size": 512, 00:07:57.084 "num_blocks": 65536, 00:07:57.084 "uuid": "18daa602-42d7-11ef-9ade-d5fc5159efa5", 00:07:57.084 "assigned_rate_limits": { 00:07:57.084 "rw_ios_per_sec": 0, 00:07:57.084 "rw_mbytes_per_sec": 0, 00:07:57.084 "r_mbytes_per_sec": 0, 00:07:57.084 "w_mbytes_per_sec": 0 00:07:57.084 }, 00:07:57.084 "claimed": true, 00:07:57.084 "claim_type": "exclusive_write", 00:07:57.084 "zoned": false, 00:07:57.084 "supported_io_types": { 00:07:57.084 "read": true, 00:07:57.084 "write": true, 00:07:57.084 "unmap": true, 00:07:57.084 "flush": true, 00:07:57.084 "reset": true, 00:07:57.084 "nvme_admin": false, 00:07:57.084 "nvme_io": false, 00:07:57.084 "nvme_io_md": false, 00:07:57.084 "write_zeroes": true, 00:07:57.084 "zcopy": true, 00:07:57.084 "get_zone_info": false, 00:07:57.084 "zone_management": false, 00:07:57.084 "zone_append": false, 00:07:57.084 "compare": false, 00:07:57.084 "compare_and_write": false, 00:07:57.084 "abort": true, 00:07:57.084 "seek_hole": false, 00:07:57.084 "seek_data": false, 00:07:57.084 "copy": true, 00:07:57.084 "nvme_iov_md": false 00:07:57.084 }, 00:07:57.084 "memory_domains": [ 00:07:57.084 { 00:07:57.084 "dma_device_id": "system", 00:07:57.084 "dma_device_type": 1 00:07:57.084 }, 00:07:57.084 { 00:07:57.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.084 "dma_device_type": 2 00:07:57.084 } 00:07:57.084 ], 00:07:57.084 "driver_specific": {} 00:07:57.084 }' 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.084 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.343 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:57.343 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.343 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.343 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:57.343 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:57.601 [2024-07-15 18:21:49.756871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.601 [2024-07-15 18:21:49.756899] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.601 [2024-07-15 18:21:49.756914] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.601 18:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.860 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.860 "name": "Existed_Raid", 00:07:57.860 "uuid": "18daad07-42d7-11ef-9ade-d5fc5159efa5", 00:07:57.860 "strip_size_kb": 64, 00:07:57.860 "state": "offline", 00:07:57.860 "raid_level": "concat", 00:07:57.860 "superblock": false, 00:07:57.860 
"num_base_bdevs": 2, 00:07:57.860 "num_base_bdevs_discovered": 1, 00:07:57.860 "num_base_bdevs_operational": 1, 00:07:57.860 "base_bdevs_list": [ 00:07:57.860 { 00:07:57.860 "name": null, 00:07:57.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.860 "is_configured": false, 00:07:57.860 "data_offset": 0, 00:07:57.860 "data_size": 65536 00:07:57.860 }, 00:07:57.860 { 00:07:57.860 "name": "BaseBdev2", 00:07:57.860 "uuid": "18daa602-42d7-11ef-9ade-d5fc5159efa5", 00:07:57.860 "is_configured": true, 00:07:57.860 "data_offset": 0, 00:07:57.860 "data_size": 65536 00:07:57.860 } 00:07:57.860 ] 00:07:57.860 }' 00:07:57.860 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.860 18:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.118 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:58.118 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:58.118 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.118 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:58.377 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:58.377 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.377 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:58.635 [2024-07-15 18:21:50.875217] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.635 [2024-07-15 18:21:50.875252] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x223214a34a00 name Existed_Raid, state offline 00:07:58.635 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:58.635 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:58.635 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.635 18:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49720 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49720 ']' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49720 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49720 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:58.893 killing process with pid 49720 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49720' 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49720 00:07:58.893 [2024-07-15 18:21:51.202861] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.893 [2024-07-15 18:21:51.202896] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.893 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49720 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:59.152 00:07:59.152 real 0m9.285s 00:07:59.152 user 0m16.154s 00:07:59.152 sys 0m1.601s 00:07:59.152 ************************************ 00:07:59.152 END TEST raid_state_function_test 00:07:59.152 ************************************ 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.152 18:21:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:59.152 18:21:51 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:59.152 18:21:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:59.152 18:21:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.152 18:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.152 ************************************ 00:07:59.152 START TEST raid_state_function_test_sb 00:07:59.152 ************************************ 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49991 00:07:59.152 Process raid pid: 49991 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49991' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49991 /var/tmp/spdk-raid.sock 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49991 ']' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:59.152 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.153 18:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.153 [2024-07-15 18:21:51.481311] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:59.153 [2024-07-15 18:21:51.481581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:59.719 EAL: TSC is not safe to use in SMP mode 00:07:59.720 EAL: TSC is not invariant 00:07:59.720 [2024-07-15 18:21:52.087267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.978 [2024-07-15 18:21:52.201611] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
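Every test in this file follows the same harness pattern just traced: spawn bdev_svc with -L bdev_raid, wait for the UNIX-domain RPC socket, then drive the target through rpc.py. A minimal sketch of the create-and-verify round trip that raid_state_function_test_sb performs next, assuming a target already listening on /var/tmp/spdk-raid.sock — the commands, sizes, and jq filter mirror the trace, while the rpc and sock variables are illustrative shorthand, not part of the test script. (The test itself actually registers the raid first, while the base bdevs are still missing, to exercise the "configuring" state seen below.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Two 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each,
# matching num_blocks in the bdev dumps below).
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2

# Assemble a concat array with a 64 KiB strip; -s writes an on-disk superblock.
$rpc -s $sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Read back the state the same way verify_raid_bdev_state does.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'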
00:07:59.978 [2024-07-15 18:21:52.203818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.978 [2024-07-15 18:21:52.204698] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.978 [2024-07-15 18:21:52.204714] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.236 18:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.236 18:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:00.236 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:00.494 [2024-07-15 18:21:52.837166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.494 [2024-07-15 18:21:52.837223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.494 [2024-07-15 18:21:52.837232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.494 [2024-07-15 18:21:52.837247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.494 18:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.752 18:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:00.752 "name": "Existed_Raid", 00:08:00.752 "uuid": "1c15dd77-42d7-11ef-9ade-d5fc5159efa5", 00:08:00.752 "strip_size_kb": 64, 00:08:00.752 "state": "configuring", 00:08:00.752 "raid_level": "concat", 00:08:00.752 "superblock": true, 00:08:00.752 "num_base_bdevs": 2, 00:08:00.752 "num_base_bdevs_discovered": 0, 00:08:00.752 "num_base_bdevs_operational": 2, 00:08:00.752 "base_bdevs_list": [ 00:08:00.752 { 00:08:00.752 "name": "BaseBdev1", 00:08:00.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.752 "is_configured": false, 00:08:00.752 "data_offset": 0, 00:08:00.752 "data_size": 0 00:08:00.752 }, 
00:08:00.752 { 00:08:00.752 "name": "BaseBdev2", 00:08:00.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.752 "is_configured": false, 00:08:00.752 "data_offset": 0, 00:08:00.752 "data_size": 0 00:08:00.752 } 00:08:00.752 ] 00:08:00.752 }' 00:08:00.752 18:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:00.752 18:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.010 18:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:01.266 [2024-07-15 18:21:53.613422] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.266 [2024-07-15 18:21:53.613454] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x141c9dc34500 name Existed_Raid, state configuring 00:08:01.266 18:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:01.582 [2024-07-15 18:21:53.941566] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.582 [2024-07-15 18:21:53.941628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.582 [2024-07-15 18:21:53.941636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.582 [2024-07-15 18:21:53.941647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.841 18:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.841 [2024-07-15 18:21:54.182663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.841 BaseBdev1 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:01.841 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:02.100 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.358 [ 00:08:02.358 { 00:08:02.358 "name": "BaseBdev1", 00:08:02.358 "aliases": [ 00:08:02.358 "1ce30446-42d7-11ef-9ade-d5fc5159efa5" 00:08:02.358 ], 00:08:02.358 "product_name": "Malloc disk", 00:08:02.358 "block_size": 512, 00:08:02.358 "num_blocks": 65536, 00:08:02.358 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:02.358 "assigned_rate_limits": { 00:08:02.358 "rw_ios_per_sec": 0, 00:08:02.358 "rw_mbytes_per_sec": 
0, 00:08:02.358 "r_mbytes_per_sec": 0, 00:08:02.358 "w_mbytes_per_sec": 0 00:08:02.358 }, 00:08:02.358 "claimed": true, 00:08:02.358 "claim_type": "exclusive_write", 00:08:02.358 "zoned": false, 00:08:02.358 "supported_io_types": { 00:08:02.358 "read": true, 00:08:02.358 "write": true, 00:08:02.358 "unmap": true, 00:08:02.358 "flush": true, 00:08:02.358 "reset": true, 00:08:02.358 "nvme_admin": false, 00:08:02.358 "nvme_io": false, 00:08:02.358 "nvme_io_md": false, 00:08:02.358 "write_zeroes": true, 00:08:02.358 "zcopy": true, 00:08:02.358 "get_zone_info": false, 00:08:02.358 "zone_management": false, 00:08:02.358 "zone_append": false, 00:08:02.358 "compare": false, 00:08:02.358 "compare_and_write": false, 00:08:02.358 "abort": true, 00:08:02.358 "seek_hole": false, 00:08:02.358 "seek_data": false, 00:08:02.358 "copy": true, 00:08:02.358 "nvme_iov_md": false 00:08:02.358 }, 00:08:02.358 "memory_domains": [ 00:08:02.358 { 00:08:02.358 "dma_device_id": "system", 00:08:02.358 "dma_device_type": 1 00:08:02.358 }, 00:08:02.358 { 00:08:02.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.358 "dma_device_type": 2 00:08:02.358 } 00:08:02.358 ], 00:08:02.358 "driver_specific": {} 00:08:02.358 } 00:08:02.358 ] 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.358 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.616 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.616 "name": "Existed_Raid", 00:08:02.616 "uuid": "1cbe6220-42d7-11ef-9ade-d5fc5159efa5", 00:08:02.616 "strip_size_kb": 64, 00:08:02.616 "state": "configuring", 00:08:02.616 "raid_level": "concat", 00:08:02.616 "superblock": true, 00:08:02.616 "num_base_bdevs": 2, 00:08:02.616 "num_base_bdevs_discovered": 1, 00:08:02.616 "num_base_bdevs_operational": 2, 00:08:02.616 "base_bdevs_list": [ 00:08:02.616 { 00:08:02.616 "name": "BaseBdev1", 00:08:02.616 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:02.616 "is_configured": true, 00:08:02.616 "data_offset": 2048, 00:08:02.616 "data_size": 
63488 00:08:02.616 }, 00:08:02.616 { 00:08:02.616 "name": "BaseBdev2", 00:08:02.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.616 "is_configured": false, 00:08:02.616 "data_offset": 0, 00:08:02.616 "data_size": 0 00:08:02.616 } 00:08:02.616 ] 00:08:02.616 }' 00:08:02.616 18:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.616 18:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.185 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:03.185 [2024-07-15 18:21:55.530126] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.185 [2024-07-15 18:21:55.530163] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x141c9dc34500 name Existed_Raid, state configuring 00:08:03.185 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:03.443 [2024-07-15 18:21:55.806243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.444 [2024-07-15 18:21:55.807085] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.444 [2024-07-15 18:21:55.807138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.702 18:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.702 18:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:03.702 "name": "Existed_Raid", 00:08:03.702 "uuid": "1ddae901-42d7-11ef-9ade-d5fc5159efa5", 00:08:03.702 "strip_size_kb": 64, 00:08:03.702 
"state": "configuring", 00:08:03.702 "raid_level": "concat", 00:08:03.702 "superblock": true, 00:08:03.702 "num_base_bdevs": 2, 00:08:03.702 "num_base_bdevs_discovered": 1, 00:08:03.702 "num_base_bdevs_operational": 2, 00:08:03.702 "base_bdevs_list": [ 00:08:03.702 { 00:08:03.702 "name": "BaseBdev1", 00:08:03.702 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:03.702 "is_configured": true, 00:08:03.702 "data_offset": 2048, 00:08:03.702 "data_size": 63488 00:08:03.702 }, 00:08:03.702 { 00:08:03.702 "name": "BaseBdev2", 00:08:03.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.702 "is_configured": false, 00:08:03.702 "data_offset": 0, 00:08:03.702 "data_size": 0 00:08:03.702 } 00:08:03.702 ] 00:08:03.702 }' 00:08:03.702 18:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:03.702 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.269 18:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.269 [2024-07-15 18:21:56.610657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.269 [2024-07-15 18:21:56.610742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x141c9dc34a00 00:08:04.269 [2024-07-15 18:21:56.610748] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.269 [2024-07-15 18:21:56.610770] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x141c9dc97e20 00:08:04.269 [2024-07-15 18:21:56.610820] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x141c9dc34a00 00:08:04.269 [2024-07-15 18:21:56.610824] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x141c9dc34a00 00:08:04.269 [2024-07-15 18:21:56.610845] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.269 BaseBdev2 00:08:04.269 18:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:04.269 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:04.270 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:04.270 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:04.270 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:04.270 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:04.270 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:04.528 18:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.786 [ 00:08:04.786 { 00:08:04.786 "name": "BaseBdev2", 00:08:04.786 "aliases": [ 00:08:04.786 "1e55a275-42d7-11ef-9ade-d5fc5159efa5" 00:08:04.786 ], 00:08:04.786 "product_name": "Malloc disk", 00:08:04.786 "block_size": 512, 00:08:04.786 "num_blocks": 65536, 00:08:04.786 "uuid": "1e55a275-42d7-11ef-9ade-d5fc5159efa5", 00:08:04.786 "assigned_rate_limits": { 00:08:04.786 "rw_ios_per_sec": 0, 
00:08:04.786 "rw_mbytes_per_sec": 0, 00:08:04.786 "r_mbytes_per_sec": 0, 00:08:04.786 "w_mbytes_per_sec": 0 00:08:04.786 }, 00:08:04.786 "claimed": true, 00:08:04.786 "claim_type": "exclusive_write", 00:08:04.786 "zoned": false, 00:08:04.786 "supported_io_types": { 00:08:04.786 "read": true, 00:08:04.786 "write": true, 00:08:04.786 "unmap": true, 00:08:04.786 "flush": true, 00:08:04.786 "reset": true, 00:08:04.786 "nvme_admin": false, 00:08:04.786 "nvme_io": false, 00:08:04.786 "nvme_io_md": false, 00:08:04.786 "write_zeroes": true, 00:08:04.786 "zcopy": true, 00:08:04.786 "get_zone_info": false, 00:08:04.786 "zone_management": false, 00:08:04.786 "zone_append": false, 00:08:04.786 "compare": false, 00:08:04.786 "compare_and_write": false, 00:08:04.786 "abort": true, 00:08:04.786 "seek_hole": false, 00:08:04.786 "seek_data": false, 00:08:04.786 "copy": true, 00:08:04.786 "nvme_iov_md": false 00:08:04.786 }, 00:08:04.786 "memory_domains": [ 00:08:04.786 { 00:08:04.786 "dma_device_id": "system", 00:08:04.786 "dma_device_type": 1 00:08:04.786 }, 00:08:04.786 { 00:08:04.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.786 "dma_device_type": 2 00:08:04.786 } 00:08:04.786 ], 00:08:04.786 "driver_specific": {} 00:08:04.786 } 00:08:04.786 ] 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.786 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.045 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.045 "name": "Existed_Raid", 00:08:05.045 "uuid": "1ddae901-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.045 "strip_size_kb": 64, 00:08:05.045 "state": "online", 00:08:05.045 "raid_level": "concat", 00:08:05.045 "superblock": true, 00:08:05.045 "num_base_bdevs": 2, 00:08:05.045 "num_base_bdevs_discovered": 2, 00:08:05.045 "num_base_bdevs_operational": 2, 
00:08:05.045 "base_bdevs_list": [ 00:08:05.045 { 00:08:05.045 "name": "BaseBdev1", 00:08:05.045 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.045 "is_configured": true, 00:08:05.045 "data_offset": 2048, 00:08:05.045 "data_size": 63488 00:08:05.045 }, 00:08:05.045 { 00:08:05.045 "name": "BaseBdev2", 00:08:05.045 "uuid": "1e55a275-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.045 "is_configured": true, 00:08:05.045 "data_offset": 2048, 00:08:05.045 "data_size": 63488 00:08:05.045 } 00:08:05.045 ] 00:08:05.045 }' 00:08:05.045 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.045 18:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:05.610 [2024-07-15 18:21:57.947011] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.610 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:05.610 "name": "Existed_Raid", 00:08:05.610 "aliases": [ 00:08:05.610 "1ddae901-42d7-11ef-9ade-d5fc5159efa5" 00:08:05.610 ], 00:08:05.610 "product_name": "Raid Volume", 00:08:05.610 "block_size": 512, 00:08:05.610 "num_blocks": 126976, 00:08:05.610 "uuid": "1ddae901-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.610 "assigned_rate_limits": { 00:08:05.610 "rw_ios_per_sec": 0, 00:08:05.610 "rw_mbytes_per_sec": 0, 00:08:05.610 "r_mbytes_per_sec": 0, 00:08:05.610 "w_mbytes_per_sec": 0 00:08:05.610 }, 00:08:05.610 "claimed": false, 00:08:05.610 "zoned": false, 00:08:05.610 "supported_io_types": { 00:08:05.610 "read": true, 00:08:05.610 "write": true, 00:08:05.610 "unmap": true, 00:08:05.610 "flush": true, 00:08:05.610 "reset": true, 00:08:05.610 "nvme_admin": false, 00:08:05.610 "nvme_io": false, 00:08:05.610 "nvme_io_md": false, 00:08:05.610 "write_zeroes": true, 00:08:05.610 "zcopy": false, 00:08:05.610 "get_zone_info": false, 00:08:05.610 "zone_management": false, 00:08:05.610 "zone_append": false, 00:08:05.610 "compare": false, 00:08:05.610 "compare_and_write": false, 00:08:05.610 "abort": false, 00:08:05.610 "seek_hole": false, 00:08:05.610 "seek_data": false, 00:08:05.610 "copy": false, 00:08:05.610 "nvme_iov_md": false 00:08:05.610 }, 00:08:05.610 "memory_domains": [ 00:08:05.610 { 00:08:05.610 "dma_device_id": "system", 00:08:05.610 "dma_device_type": 1 00:08:05.610 }, 00:08:05.610 { 00:08:05.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.610 "dma_device_type": 2 00:08:05.610 }, 00:08:05.610 { 00:08:05.610 "dma_device_id": "system", 00:08:05.610 "dma_device_type": 1 00:08:05.610 
}, 00:08:05.610 { 00:08:05.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.610 "dma_device_type": 2 00:08:05.610 } 00:08:05.610 ], 00:08:05.610 "driver_specific": { 00:08:05.610 "raid": { 00:08:05.610 "uuid": "1ddae901-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.610 "strip_size_kb": 64, 00:08:05.610 "state": "online", 00:08:05.610 "raid_level": "concat", 00:08:05.610 "superblock": true, 00:08:05.610 "num_base_bdevs": 2, 00:08:05.610 "num_base_bdevs_discovered": 2, 00:08:05.610 "num_base_bdevs_operational": 2, 00:08:05.610 "base_bdevs_list": [ 00:08:05.610 { 00:08:05.610 "name": "BaseBdev1", 00:08:05.610 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.610 "is_configured": true, 00:08:05.610 "data_offset": 2048, 00:08:05.610 "data_size": 63488 00:08:05.610 }, 00:08:05.610 { 00:08:05.610 "name": "BaseBdev2", 00:08:05.610 "uuid": "1e55a275-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.610 "is_configured": true, 00:08:05.610 "data_offset": 2048, 00:08:05.610 "data_size": 63488 00:08:05.610 } 00:08:05.610 ] 00:08:05.610 } 00:08:05.610 } 00:08:05.610 }' 00:08:05.611 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.611 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:05.611 BaseBdev2' 00:08:05.611 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:05.611 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:05.611 18:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:05.870 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:05.870 "name": "BaseBdev1", 00:08:05.870 "aliases": [ 00:08:05.870 "1ce30446-42d7-11ef-9ade-d5fc5159efa5" 00:08:05.870 ], 00:08:05.870 "product_name": "Malloc disk", 00:08:05.870 "block_size": 512, 00:08:05.870 "num_blocks": 65536, 00:08:05.870 "uuid": "1ce30446-42d7-11ef-9ade-d5fc5159efa5", 00:08:05.870 "assigned_rate_limits": { 00:08:05.870 "rw_ios_per_sec": 0, 00:08:05.870 "rw_mbytes_per_sec": 0, 00:08:05.870 "r_mbytes_per_sec": 0, 00:08:05.870 "w_mbytes_per_sec": 0 00:08:05.870 }, 00:08:05.870 "claimed": true, 00:08:05.870 "claim_type": "exclusive_write", 00:08:05.870 "zoned": false, 00:08:05.870 "supported_io_types": { 00:08:05.870 "read": true, 00:08:05.870 "write": true, 00:08:05.870 "unmap": true, 00:08:05.870 "flush": true, 00:08:05.870 "reset": true, 00:08:05.870 "nvme_admin": false, 00:08:05.870 "nvme_io": false, 00:08:05.870 "nvme_io_md": false, 00:08:05.870 "write_zeroes": true, 00:08:05.870 "zcopy": true, 00:08:05.870 "get_zone_info": false, 00:08:05.870 "zone_management": false, 00:08:05.870 "zone_append": false, 00:08:05.870 "compare": false, 00:08:05.870 "compare_and_write": false, 00:08:05.870 "abort": true, 00:08:05.870 "seek_hole": false, 00:08:05.870 "seek_data": false, 00:08:05.870 "copy": true, 00:08:05.870 "nvme_iov_md": false 00:08:05.870 }, 00:08:05.870 "memory_domains": [ 00:08:05.870 { 00:08:05.870 "dma_device_id": "system", 00:08:05.870 "dma_device_type": 1 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.870 "dma_device_type": 2 00:08:05.870 } 00:08:05.870 ], 00:08:05.870 "driver_specific": {} 00:08:05.870 }' 00:08:05.870 18:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:05.870 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:06.128 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:06.387 "name": "BaseBdev2", 00:08:06.387 "aliases": [ 00:08:06.387 "1e55a275-42d7-11ef-9ade-d5fc5159efa5" 00:08:06.387 ], 00:08:06.387 "product_name": "Malloc disk", 00:08:06.387 "block_size": 512, 00:08:06.387 "num_blocks": 65536, 00:08:06.387 "uuid": "1e55a275-42d7-11ef-9ade-d5fc5159efa5", 00:08:06.387 "assigned_rate_limits": { 00:08:06.387 "rw_ios_per_sec": 0, 00:08:06.387 "rw_mbytes_per_sec": 0, 00:08:06.387 "r_mbytes_per_sec": 0, 00:08:06.387 "w_mbytes_per_sec": 0 00:08:06.387 }, 00:08:06.387 "claimed": true, 00:08:06.387 "claim_type": "exclusive_write", 00:08:06.387 "zoned": false, 00:08:06.387 "supported_io_types": { 00:08:06.387 "read": true, 00:08:06.387 "write": true, 00:08:06.387 "unmap": true, 00:08:06.387 "flush": true, 00:08:06.387 "reset": true, 00:08:06.387 "nvme_admin": false, 00:08:06.387 "nvme_io": false, 00:08:06.387 "nvme_io_md": false, 00:08:06.387 "write_zeroes": true, 00:08:06.387 "zcopy": true, 00:08:06.387 "get_zone_info": false, 00:08:06.387 "zone_management": false, 00:08:06.387 "zone_append": false, 00:08:06.387 "compare": false, 00:08:06.387 "compare_and_write": false, 00:08:06.387 "abort": true, 00:08:06.387 "seek_hole": false, 00:08:06.387 "seek_data": false, 00:08:06.387 "copy": true, 00:08:06.387 "nvme_iov_md": false 00:08:06.387 }, 00:08:06.387 "memory_domains": [ 00:08:06.387 { 00:08:06.387 "dma_device_id": "system", 00:08:06.387 "dma_device_type": 1 00:08:06.387 }, 00:08:06.387 { 00:08:06.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.387 "dma_device_type": 2 00:08:06.387 } 00:08:06.387 ], 00:08:06.387 "driver_specific": {} 00:08:06.387 }' 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:06.387 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:06.646 [2024-07-15 18:21:58.879272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.646 [2024-07-15 18:21:58.879298] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.646 [2024-07-15 18:21:58.879313] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
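The trace above is the degradation check: concat carries no redundancy (has_redundancy returns 1 for it), so deleting a single base bdev must drive Existed_Raid from online to offline, which verify_raid_bdev_state then asserts. Condensed to its RPC essentials, with the same illustrative rpc/sock shorthand as before:

# Concat has no redundancy: losing one of two base bdevs must take the array offline.
$rpc -s $sock bdev_malloc_delete BaseBdev1
state=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
[ "$state" = offline ] || echo "unexpected raid state: $state"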
00:08:06.646 18:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.905 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:06.905 "name": "Existed_Raid", 00:08:06.905 "uuid": "1ddae901-42d7-11ef-9ade-d5fc5159efa5", 00:08:06.905 "strip_size_kb": 64, 00:08:06.905 "state": "offline", 00:08:06.905 "raid_level": "concat", 00:08:06.905 "superblock": true, 00:08:06.905 "num_base_bdevs": 2, 00:08:06.905 "num_base_bdevs_discovered": 1, 00:08:06.905 "num_base_bdevs_operational": 1, 00:08:06.905 "base_bdevs_list": [ 00:08:06.905 { 00:08:06.905 "name": null, 00:08:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.905 "is_configured": false, 00:08:06.905 "data_offset": 2048, 00:08:06.905 "data_size": 63488 00:08:06.905 }, 00:08:06.905 { 00:08:06.905 "name": "BaseBdev2", 00:08:06.905 "uuid": "1e55a275-42d7-11ef-9ade-d5fc5159efa5", 00:08:06.905 "is_configured": true, 00:08:06.905 "data_offset": 2048, 00:08:06.905 "data_size": 63488 00:08:06.905 } 00:08:06.905 ] 00:08:06.905 }' 00:08:06.905 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:06.905 18:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:07.219 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:07.219 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.219 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:07.477 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:07.477 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.477 18:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:07.735 [2024-07-15 18:21:59.981469] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.735 [2024-07-15 18:21:59.981503] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x141c9dc34a00 name Existed_Raid, state offline 00:08:07.735 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:07.735 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:07.735 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.735 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49991 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 49991 ']' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49991 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49991 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:07.993 killing process with pid 49991 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49991' 00:08:07.993 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49991 00:08:07.993 [2024-07-15 18:22:00.251076] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.993 [2024-07-15 18:22:00.251112] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.994 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49991 00:08:08.253 18:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:08.253 00:08:08.253 real 0m9.003s 00:08:08.253 user 0m15.626s 00:08:08.253 sys 0m1.565s 00:08:08.253 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.253 18:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 ************************************ 00:08:08.253 END TEST raid_state_function_test_sb 00:08:08.253 ************************************ 00:08:08.253 18:22:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:08.253 18:22:00 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:08.253 18:22:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.253 18:22:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.253 18:22:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 ************************************ 00:08:08.253 START TEST raid_superblock_test 00:08:08.253 ************************************ 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50265 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50265 /var/tmp/spdk-raid.sock 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50265 ']' 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.253 18:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 [2024-07-15 18:22:00.524819] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:08.253 [2024-07-15 18:22:00.525025] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:08.819 EAL: TSC is not safe to use in SMP mode 00:08:08.819 EAL: TSC is not invariant 00:08:08.819 [2024-07-15 18:22:01.144906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.076 [2024-07-15 18:22:01.255492] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
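raid_superblock_test builds its base bdevs differently from the state-function tests: each malloc is wrapped in a passthru bdev created with a fixed UUID, which keeps the base bdev identities predictable in the superblock and in the dumps that follow. Reduced to its RPC calls under the same illustrative shorthand (the UUIDs and names are the ones traced below):

# One passthru per malloc, pinned to a well-known UUID.
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 512 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# The array is then created on the passthru layer, superblock included.
$rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s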
00:08:09.076 [2024-07-15 18:22:01.257600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.076 [2024-07-15 18:22:01.258376] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.076 [2024-07-15 18:22:01.258390] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.334 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:09.593 malloc1 00:08:09.593 18:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.852 [2024-07-15 18:22:02.038899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.852 [2024-07-15 18:22:02.038972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.852 [2024-07-15 18:22:02.039009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219749634780 00:08:09.852 [2024-07-15 18:22:02.039019] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.852 [2024-07-15 18:22:02.039933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.852 [2024-07-15 18:22:02.039960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.852 pt1 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.852 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.852 18:22:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:10.110 malloc2 00:08:10.110 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.368 [2024-07-15 18:22:02.551045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.368 [2024-07-15 18:22:02.551101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.368 [2024-07-15 18:22:02.551114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219749634c80 00:08:10.368 [2024-07-15 18:22:02.551122] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.368 [2024-07-15 18:22:02.551799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.368 [2024-07-15 18:22:02.551829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.368 pt2 00:08:10.368 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:10.368 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:10.368 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:10.625 [2024-07-15 18:22:02.831139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.625 [2024-07-15 18:22:02.831771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.625 [2024-07-15 18:22:02.831836] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x219749634f00 00:08:10.625 [2024-07-15 18:22:02.831842] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.625 [2024-07-15 18:22:02.831885] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x219749697e20 00:08:10.625 [2024-07-15 18:22:02.831969] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x219749634f00 00:08:10.625 [2024-07-15 18:22:02.831974] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x219749634f00 00:08:10.625 [2024-07-15 18:22:02.832004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.625 18:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.883 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.883 "name": "raid_bdev1", 00:08:10.883 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:10.883 "strip_size_kb": 64, 00:08:10.883 "state": "online", 00:08:10.883 "raid_level": "concat", 00:08:10.883 "superblock": true, 00:08:10.883 "num_base_bdevs": 2, 00:08:10.883 "num_base_bdevs_discovered": 2, 00:08:10.883 "num_base_bdevs_operational": 2, 00:08:10.883 "base_bdevs_list": [ 00:08:10.883 { 00:08:10.883 "name": "pt1", 00:08:10.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.883 "is_configured": true, 00:08:10.883 "data_offset": 2048, 00:08:10.883 "data_size": 63488 00:08:10.883 }, 00:08:10.883 { 00:08:10.883 "name": "pt2", 00:08:10.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.883 "is_configured": true, 00:08:10.883 "data_offset": 2048, 00:08:10.883 "data_size": 63488 00:08:10.883 } 00:08:10.883 ] 00:08:10.883 }' 00:08:10.883 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.883 18:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:11.141 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:11.399 [2024-07-15 18:22:03.635410] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:11.399 "name": "raid_bdev1", 00:08:11.399 "aliases": [ 00:08:11.399 "220ad2ff-42d7-11ef-9ade-d5fc5159efa5" 00:08:11.399 ], 00:08:11.399 "product_name": "Raid Volume", 00:08:11.399 "block_size": 512, 00:08:11.399 "num_blocks": 126976, 00:08:11.399 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:11.399 "assigned_rate_limits": { 00:08:11.399 "rw_ios_per_sec": 0, 00:08:11.399 "rw_mbytes_per_sec": 0, 00:08:11.399 "r_mbytes_per_sec": 0, 00:08:11.399 "w_mbytes_per_sec": 0 00:08:11.399 }, 00:08:11.399 "claimed": false, 00:08:11.399 "zoned": false, 00:08:11.399 "supported_io_types": { 00:08:11.399 "read": true, 00:08:11.399 "write": true, 00:08:11.399 "unmap": true, 00:08:11.399 "flush": true, 00:08:11.399 "reset": true, 00:08:11.399 "nvme_admin": false, 00:08:11.399 "nvme_io": 
false, 00:08:11.399 "nvme_io_md": false, 00:08:11.399 "write_zeroes": true, 00:08:11.399 "zcopy": false, 00:08:11.399 "get_zone_info": false, 00:08:11.399 "zone_management": false, 00:08:11.399 "zone_append": false, 00:08:11.399 "compare": false, 00:08:11.399 "compare_and_write": false, 00:08:11.399 "abort": false, 00:08:11.399 "seek_hole": false, 00:08:11.399 "seek_data": false, 00:08:11.399 "copy": false, 00:08:11.399 "nvme_iov_md": false 00:08:11.399 }, 00:08:11.399 "memory_domains": [ 00:08:11.399 { 00:08:11.399 "dma_device_id": "system", 00:08:11.399 "dma_device_type": 1 00:08:11.399 }, 00:08:11.399 { 00:08:11.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.399 "dma_device_type": 2 00:08:11.399 }, 00:08:11.399 { 00:08:11.399 "dma_device_id": "system", 00:08:11.399 "dma_device_type": 1 00:08:11.399 }, 00:08:11.399 { 00:08:11.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.399 "dma_device_type": 2 00:08:11.399 } 00:08:11.399 ], 00:08:11.399 "driver_specific": { 00:08:11.399 "raid": { 00:08:11.399 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:11.399 "strip_size_kb": 64, 00:08:11.399 "state": "online", 00:08:11.399 "raid_level": "concat", 00:08:11.399 "superblock": true, 00:08:11.399 "num_base_bdevs": 2, 00:08:11.399 "num_base_bdevs_discovered": 2, 00:08:11.399 "num_base_bdevs_operational": 2, 00:08:11.399 "base_bdevs_list": [ 00:08:11.399 { 00:08:11.399 "name": "pt1", 00:08:11.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.399 "is_configured": true, 00:08:11.399 "data_offset": 2048, 00:08:11.399 "data_size": 63488 00:08:11.399 }, 00:08:11.399 { 00:08:11.399 "name": "pt2", 00:08:11.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.399 "is_configured": true, 00:08:11.399 "data_offset": 2048, 00:08:11.399 "data_size": 63488 00:08:11.399 } 00:08:11.399 ] 00:08:11.399 } 00:08:11.399 } 00:08:11.399 }' 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:11.399 pt2' 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:11.399 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:11.658 "name": "pt1", 00:08:11.658 "aliases": [ 00:08:11.658 "00000000-0000-0000-0000-000000000001" 00:08:11.658 ], 00:08:11.658 "product_name": "passthru", 00:08:11.658 "block_size": 512, 00:08:11.658 "num_blocks": 65536, 00:08:11.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.658 "assigned_rate_limits": { 00:08:11.658 "rw_ios_per_sec": 0, 00:08:11.658 "rw_mbytes_per_sec": 0, 00:08:11.658 "r_mbytes_per_sec": 0, 00:08:11.658 "w_mbytes_per_sec": 0 00:08:11.658 }, 00:08:11.658 "claimed": true, 00:08:11.658 "claim_type": "exclusive_write", 00:08:11.658 "zoned": false, 00:08:11.658 "supported_io_types": { 00:08:11.658 "read": true, 00:08:11.658 "write": true, 00:08:11.658 "unmap": true, 00:08:11.658 "flush": true, 00:08:11.658 "reset": true, 00:08:11.658 "nvme_admin": false, 00:08:11.658 "nvme_io": false, 00:08:11.658 "nvme_io_md": false, 00:08:11.658 "write_zeroes": true, 
00:08:11.658 "zcopy": true, 00:08:11.658 "get_zone_info": false, 00:08:11.658 "zone_management": false, 00:08:11.658 "zone_append": false, 00:08:11.658 "compare": false, 00:08:11.658 "compare_and_write": false, 00:08:11.658 "abort": true, 00:08:11.658 "seek_hole": false, 00:08:11.658 "seek_data": false, 00:08:11.658 "copy": true, 00:08:11.658 "nvme_iov_md": false 00:08:11.658 }, 00:08:11.658 "memory_domains": [ 00:08:11.658 { 00:08:11.658 "dma_device_id": "system", 00:08:11.658 "dma_device_type": 1 00:08:11.658 }, 00:08:11.658 { 00:08:11.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.658 "dma_device_type": 2 00:08:11.658 } 00:08:11.658 ], 00:08:11.658 "driver_specific": { 00:08:11.658 "passthru": { 00:08:11.658 "name": "pt1", 00:08:11.658 "base_bdev_name": "malloc1" 00:08:11.658 } 00:08:11.658 } 00:08:11.658 }' 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.658 18:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.658 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:11.658 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:11.658 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:11.658 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:11.958 "name": "pt2", 00:08:11.958 "aliases": [ 00:08:11.958 "00000000-0000-0000-0000-000000000002" 00:08:11.958 ], 00:08:11.958 "product_name": "passthru", 00:08:11.958 "block_size": 512, 00:08:11.958 "num_blocks": 65536, 00:08:11.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.958 "assigned_rate_limits": { 00:08:11.958 "rw_ios_per_sec": 0, 00:08:11.958 "rw_mbytes_per_sec": 0, 00:08:11.958 "r_mbytes_per_sec": 0, 00:08:11.958 "w_mbytes_per_sec": 0 00:08:11.958 }, 00:08:11.958 "claimed": true, 00:08:11.958 "claim_type": "exclusive_write", 00:08:11.958 "zoned": false, 00:08:11.958 "supported_io_types": { 00:08:11.958 "read": true, 00:08:11.958 "write": true, 00:08:11.958 "unmap": true, 00:08:11.958 "flush": true, 00:08:11.958 "reset": true, 00:08:11.958 "nvme_admin": false, 00:08:11.958 "nvme_io": false, 00:08:11.958 "nvme_io_md": false, 00:08:11.958 "write_zeroes": true, 00:08:11.958 "zcopy": true, 00:08:11.958 "get_zone_info": false, 00:08:11.958 "zone_management": false, 00:08:11.958 "zone_append": false, 00:08:11.958 
"compare": false, 00:08:11.958 "compare_and_write": false, 00:08:11.958 "abort": true, 00:08:11.958 "seek_hole": false, 00:08:11.958 "seek_data": false, 00:08:11.958 "copy": true, 00:08:11.958 "nvme_iov_md": false 00:08:11.958 }, 00:08:11.958 "memory_domains": [ 00:08:11.958 { 00:08:11.958 "dma_device_id": "system", 00:08:11.958 "dma_device_type": 1 00:08:11.958 }, 00:08:11.958 { 00:08:11.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.958 "dma_device_type": 2 00:08:11.958 } 00:08:11.958 ], 00:08:11.958 "driver_specific": { 00:08:11.958 "passthru": { 00:08:11.958 "name": "pt2", 00:08:11.958 "base_bdev_name": "malloc2" 00:08:11.958 } 00:08:11.958 } 00:08:11.958 }' 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:11.958 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:12.216 [2024-07-15 18:22:04.539691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.216 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=220ad2ff-42d7-11ef-9ade-d5fc5159efa5 00:08:12.216 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 220ad2ff-42d7-11ef-9ade-d5fc5159efa5 ']' 00:08:12.216 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:12.474 [2024-07-15 18:22:04.815696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.474 [2024-07-15 18:22:04.815722] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.474 [2024-07-15 18:22:04.815770] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.474 [2024-07-15 18:22:04.815783] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.474 [2024-07-15 18:22:04.815788] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x219749634f00 name raid_bdev1, state offline 00:08:12.474 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:12.474 18:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:12.732 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:12.732 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:12.732 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.732 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:12.990 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.990 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:13.248 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:13.248 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:13.506 18:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.765 [2024-07-15 18:22:06.028102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.765 [2024-07-15 18:22:06.028706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.765 [2024-07-15 18:22:06.028732] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:13.765 [2024-07-15 18:22:06.028771] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.765 [2024-07-15 18:22:06.028783] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.765 [2024-07-15 18:22:06.028788] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x219749634c80 name raid_bdev1, state configuring 00:08:13.765 request: 00:08:13.765 { 00:08:13.765 "name": "raid_bdev1", 00:08:13.765 "raid_level": "concat", 00:08:13.765 "base_bdevs": [ 00:08:13.765 "malloc1", 00:08:13.765 "malloc2" 00:08:13.765 ], 00:08:13.765 "strip_size_kb": 64, 00:08:13.765 "superblock": false, 00:08:13.765 "method": "bdev_raid_create", 00:08:13.765 "req_id": 1 00:08:13.765 } 00:08:13.765 Got JSON-RPC error response 00:08:13.765 response: 00:08:13.765 { 00:08:13.765 "code": -17, 00:08:13.765 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.765 } 00:08:13.765 18:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:13.765 18:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.766 18:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.766 18:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.766 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.766 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:14.025 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:14.025 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:14.025 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.284 [2024-07-15 18:22:06.544258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.284 [2024-07-15 18:22:06.544317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.284 [2024-07-15 18:22:06.544330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219749634780 00:08:14.284 [2024-07-15 18:22:06.544338] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.284 [2024-07-15 18:22:06.545024] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.284 [2024-07-15 18:22:06.545053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.284 [2024-07-15 18:22:06.545079] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.284 [2024-07-15 18:22:06.545091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.284 pt1 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.284 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.542 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.542 "name": "raid_bdev1", 00:08:14.542 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:14.542 "strip_size_kb": 64, 00:08:14.542 "state": "configuring", 00:08:14.542 "raid_level": "concat", 00:08:14.542 "superblock": true, 00:08:14.542 "num_base_bdevs": 2, 00:08:14.542 "num_base_bdevs_discovered": 1, 00:08:14.542 "num_base_bdevs_operational": 2, 00:08:14.542 "base_bdevs_list": [ 00:08:14.542 { 00:08:14.542 "name": "pt1", 00:08:14.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.542 "is_configured": true, 00:08:14.542 "data_offset": 2048, 00:08:14.542 "data_size": 63488 00:08:14.542 }, 00:08:14.542 { 00:08:14.542 "name": null, 00:08:14.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.542 "is_configured": false, 00:08:14.542 "data_offset": 2048, 00:08:14.542 "data_size": 63488 00:08:14.542 } 00:08:14.542 ] 00:08:14.542 }' 00:08:14.542 18:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.542 18:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:14.801 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:14.801 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:14.801 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.061 [2024-07-15 18:22:07.376475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.061 [2024-07-15 18:22:07.376541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.061 [2024-07-15 18:22:07.376553] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219749634f00 00:08:15.061 [2024-07-15 18:22:07.376561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.061 [2024-07-15 18:22:07.376680] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.061 [2024-07-15 18:22:07.376692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:15.061 [2024-07-15 18:22:07.376715] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
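
Condensed from the trace above, the superblock round trip this test exercises can be replayed by hand against the running target. A minimal sketch, assuming the same rpc.py path and RPC socket as in the log; the zero-padded passthru UUIDs are the ones the test hard-codes:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # base devices: 32 MiB malloc bdevs with a 512-byte block size
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc2
    # passthru wrappers with fixed UUIDs, so the superblock contents are stable
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # concat raid with a 64 KiB strip and an on-disk superblock (-s)
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
    # deleting the raid and its passthrus leaves the superblock on the malloc bdevs;
    # that is why the later bdev_raid_create against 'malloc1 malloc2' fails with
    # -17 "File exists", and why re-creating the passthrus alone re-assembles raid_bdev1
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    "$rpc" -s "$sock" bdev_passthru_delete pt1
    "$rpc" -s "$sock" bdev_passthru_delete pt2
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

After the second passthru is re-created, the examine path finds the superblock on both wrappers and the raid transitions from "configuring" back to "online", which is exactly what the state dumps that follow verify.
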
00:08:15.061 [2024-07-15 18:22:07.376723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.061 [2024-07-15 18:22:07.376749] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x219749635180 00:08:15.061 [2024-07-15 18:22:07.376753] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.061 [2024-07-15 18:22:07.376773] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x219749697e20 00:08:15.061 [2024-07-15 18:22:07.376829] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x219749635180 00:08:15.061 [2024-07-15 18:22:07.376833] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x219749635180 00:08:15.061 [2024-07-15 18:22:07.376856] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.061 pt2 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:15.061 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:15.062 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:15.062 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:15.062 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.062 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.351 18:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:15.351 "name": "raid_bdev1", 00:08:15.351 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:15.351 "strip_size_kb": 64, 00:08:15.351 "state": "online", 00:08:15.351 "raid_level": "concat", 00:08:15.351 "superblock": true, 00:08:15.351 "num_base_bdevs": 2, 00:08:15.351 "num_base_bdevs_discovered": 2, 00:08:15.351 "num_base_bdevs_operational": 2, 00:08:15.351 "base_bdevs_list": [ 00:08:15.351 { 00:08:15.351 "name": "pt1", 00:08:15.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.351 "is_configured": true, 00:08:15.351 "data_offset": 2048, 00:08:15.351 "data_size": 63488 00:08:15.351 }, 00:08:15.351 { 00:08:15.351 "name": "pt2", 00:08:15.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.351 "is_configured": true, 00:08:15.351 "data_offset": 2048, 00:08:15.351 "data_size": 63488 00:08:15.351 } 00:08:15.351 ] 00:08:15.351 }' 00:08:15.351 18:22:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:15.351 18:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:15.927 [2024-07-15 18:22:08.244741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:15.927 "name": "raid_bdev1", 00:08:15.927 "aliases": [ 00:08:15.927 "220ad2ff-42d7-11ef-9ade-d5fc5159efa5" 00:08:15.927 ], 00:08:15.927 "product_name": "Raid Volume", 00:08:15.927 "block_size": 512, 00:08:15.927 "num_blocks": 126976, 00:08:15.927 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:15.927 "assigned_rate_limits": { 00:08:15.927 "rw_ios_per_sec": 0, 00:08:15.927 "rw_mbytes_per_sec": 0, 00:08:15.927 "r_mbytes_per_sec": 0, 00:08:15.927 "w_mbytes_per_sec": 0 00:08:15.927 }, 00:08:15.927 "claimed": false, 00:08:15.927 "zoned": false, 00:08:15.927 "supported_io_types": { 00:08:15.927 "read": true, 00:08:15.927 "write": true, 00:08:15.927 "unmap": true, 00:08:15.927 "flush": true, 00:08:15.927 "reset": true, 00:08:15.927 "nvme_admin": false, 00:08:15.927 "nvme_io": false, 00:08:15.927 "nvme_io_md": false, 00:08:15.927 "write_zeroes": true, 00:08:15.927 "zcopy": false, 00:08:15.927 "get_zone_info": false, 00:08:15.927 "zone_management": false, 00:08:15.927 "zone_append": false, 00:08:15.927 "compare": false, 00:08:15.927 "compare_and_write": false, 00:08:15.927 "abort": false, 00:08:15.927 "seek_hole": false, 00:08:15.927 "seek_data": false, 00:08:15.927 "copy": false, 00:08:15.927 "nvme_iov_md": false 00:08:15.927 }, 00:08:15.927 "memory_domains": [ 00:08:15.927 { 00:08:15.927 "dma_device_id": "system", 00:08:15.927 "dma_device_type": 1 00:08:15.927 }, 00:08:15.927 { 00:08:15.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.927 "dma_device_type": 2 00:08:15.927 }, 00:08:15.927 { 00:08:15.927 "dma_device_id": "system", 00:08:15.927 "dma_device_type": 1 00:08:15.927 }, 00:08:15.927 { 00:08:15.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.927 "dma_device_type": 2 00:08:15.927 } 00:08:15.927 ], 00:08:15.927 "driver_specific": { 00:08:15.927 "raid": { 00:08:15.927 "uuid": "220ad2ff-42d7-11ef-9ade-d5fc5159efa5", 00:08:15.927 "strip_size_kb": 64, 00:08:15.927 "state": "online", 00:08:15.927 "raid_level": "concat", 00:08:15.927 "superblock": true, 00:08:15.927 "num_base_bdevs": 2, 00:08:15.927 "num_base_bdevs_discovered": 2, 00:08:15.927 "num_base_bdevs_operational": 2, 00:08:15.927 "base_bdevs_list": [ 00:08:15.927 { 00:08:15.927 "name": "pt1", 00:08:15.927 "uuid": "00000000-0000-0000-0000-000000000001", 
00:08:15.927 "is_configured": true, 00:08:15.927 "data_offset": 2048, 00:08:15.927 "data_size": 63488 00:08:15.927 }, 00:08:15.927 { 00:08:15.927 "name": "pt2", 00:08:15.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.927 "is_configured": true, 00:08:15.927 "data_offset": 2048, 00:08:15.927 "data_size": 63488 00:08:15.927 } 00:08:15.927 ] 00:08:15.927 } 00:08:15.927 } 00:08:15.927 }' 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:15.927 pt2' 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:15.927 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:16.186 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:16.186 "name": "pt1", 00:08:16.186 "aliases": [ 00:08:16.186 "00000000-0000-0000-0000-000000000001" 00:08:16.186 ], 00:08:16.186 "product_name": "passthru", 00:08:16.186 "block_size": 512, 00:08:16.186 "num_blocks": 65536, 00:08:16.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.186 "assigned_rate_limits": { 00:08:16.186 "rw_ios_per_sec": 0, 00:08:16.186 "rw_mbytes_per_sec": 0, 00:08:16.186 "r_mbytes_per_sec": 0, 00:08:16.186 "w_mbytes_per_sec": 0 00:08:16.186 }, 00:08:16.186 "claimed": true, 00:08:16.186 "claim_type": "exclusive_write", 00:08:16.186 "zoned": false, 00:08:16.186 "supported_io_types": { 00:08:16.186 "read": true, 00:08:16.186 "write": true, 00:08:16.186 "unmap": true, 00:08:16.186 "flush": true, 00:08:16.186 "reset": true, 00:08:16.186 "nvme_admin": false, 00:08:16.186 "nvme_io": false, 00:08:16.186 "nvme_io_md": false, 00:08:16.186 "write_zeroes": true, 00:08:16.186 "zcopy": true, 00:08:16.186 "get_zone_info": false, 00:08:16.186 "zone_management": false, 00:08:16.186 "zone_append": false, 00:08:16.186 "compare": false, 00:08:16.186 "compare_and_write": false, 00:08:16.186 "abort": true, 00:08:16.186 "seek_hole": false, 00:08:16.186 "seek_data": false, 00:08:16.186 "copy": true, 00:08:16.186 "nvme_iov_md": false 00:08:16.186 }, 00:08:16.186 "memory_domains": [ 00:08:16.186 { 00:08:16.186 "dma_device_id": "system", 00:08:16.186 "dma_device_type": 1 00:08:16.186 }, 00:08:16.186 { 00:08:16.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.186 "dma_device_type": 2 00:08:16.186 } 00:08:16.186 ], 00:08:16.186 "driver_specific": { 00:08:16.186 "passthru": { 00:08:16.186 "name": "pt1", 00:08:16.186 "base_bdev_name": "malloc1" 00:08:16.186 } 00:08:16.186 } 00:08:16.186 }' 00:08:16.186 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:16.444 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:16.703 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:16.703 "name": "pt2", 00:08:16.703 "aliases": [ 00:08:16.703 "00000000-0000-0000-0000-000000000002" 00:08:16.703 ], 00:08:16.703 "product_name": "passthru", 00:08:16.703 "block_size": 512, 00:08:16.703 "num_blocks": 65536, 00:08:16.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.703 "assigned_rate_limits": { 00:08:16.703 "rw_ios_per_sec": 0, 00:08:16.703 "rw_mbytes_per_sec": 0, 00:08:16.703 "r_mbytes_per_sec": 0, 00:08:16.703 "w_mbytes_per_sec": 0 00:08:16.703 }, 00:08:16.703 "claimed": true, 00:08:16.703 "claim_type": "exclusive_write", 00:08:16.703 "zoned": false, 00:08:16.703 "supported_io_types": { 00:08:16.703 "read": true, 00:08:16.703 "write": true, 00:08:16.703 "unmap": true, 00:08:16.703 "flush": true, 00:08:16.703 "reset": true, 00:08:16.703 "nvme_admin": false, 00:08:16.703 "nvme_io": false, 00:08:16.703 "nvme_io_md": false, 00:08:16.703 "write_zeroes": true, 00:08:16.703 "zcopy": true, 00:08:16.703 "get_zone_info": false, 00:08:16.703 "zone_management": false, 00:08:16.703 "zone_append": false, 00:08:16.703 "compare": false, 00:08:16.703 "compare_and_write": false, 00:08:16.703 "abort": true, 00:08:16.703 "seek_hole": false, 00:08:16.703 "seek_data": false, 00:08:16.703 "copy": true, 00:08:16.703 "nvme_iov_md": false 00:08:16.703 }, 00:08:16.704 "memory_domains": [ 00:08:16.704 { 00:08:16.704 "dma_device_id": "system", 00:08:16.704 "dma_device_type": 1 00:08:16.704 }, 00:08:16.704 { 00:08:16.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.704 "dma_device_type": 2 00:08:16.704 } 00:08:16.704 ], 00:08:16.704 "driver_specific": { 00:08:16.704 "passthru": { 00:08:16.704 "name": "pt2", 00:08:16.704 "base_bdev_name": "malloc2" 00:08:16.704 } 00:08:16.704 } 00:08:16.704 }' 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
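
The paired jq invocations above (each filter runs twice, followed by a [[ ... ]] comparison such as [[ 512 == 512 ]]) suggest that verify_raid_bdev_properties extracts the same field from two JSON dumps — the raid volume and each base bdev — and requires the values to match. A plausible reconstruction of that check, not necessarily the test's literal code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    raid_info=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    base_info=$("$rpc" -s "$sock" bdev_get_bdevs -b pt2 | jq '.[]')
    # any property mismatch between the raid volume and a base bdev fails the test
    [[ $(jq .block_size <<< "$raid_info") == $(jq .block_size <<< "$base_info") ]]        # 512 == 512
    [[ $(jq .md_size <<< "$raid_info") == $(jq .md_size <<< "$base_info") ]]              # null == null
    [[ $(jq .md_interleave <<< "$raid_info") == $(jq .md_interleave <<< "$base_info") ]]  # null == null
    [[ $(jq .dif_type <<< "$raid_info") == $(jq .dif_type <<< "$base_info") ]]            # null == null
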
00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:16.704 18:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:16.962 [2024-07-15 18:22:09.184981] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 220ad2ff-42d7-11ef-9ade-d5fc5159efa5 '!=' 220ad2ff-42d7-11ef-9ade-d5fc5159efa5 ']' 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50265 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50265 ']' 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50265 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50265 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:16.962 killing process with pid 50265 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50265' 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50265 00:08:16.962 [2024-07-15 18:22:09.215759] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.962 [2024-07-15 18:22:09.215789] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.962 [2024-07-15 18:22:09.215802] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.962 [2024-07-15 18:22:09.215807] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x219749635180 name raid_bdev1, state offline 00:08:16.962 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50265 00:08:16.962 [2024-07-15 18:22:09.230315] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.219 18:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:17.219 00:08:17.219 real 0m8.940s 00:08:17.219 user 0m15.436s 00:08:17.219 sys 0m1.635s 00:08:17.219 18:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.219 18:22:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.219 ************************************ 00:08:17.219 END TEST raid_superblock_test 00:08:17.219 ************************************ 00:08:17.219 18:22:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:17.219 18:22:09 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:17.219 18:22:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:17.219 18:22:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.219 18:22:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.219 ************************************ 00:08:17.219 START TEST raid_read_error_test 00:08:17.219 ************************************ 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:17.219 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.O6ftjMA3nX 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50530 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50530 
/var/tmp/spdk-raid.sock 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50530 ']' 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.220 18:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.220 [2024-07-15 18:22:09.523465] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:17.220 [2024-07-15 18:22:09.523689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:17.783 EAL: TSC is not safe to use in SMP mode 00:08:17.783 EAL: TSC is not invariant 00:08:17.784 [2024-07-15 18:22:10.118024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.041 [2024-07-15 18:22:10.227586] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:18.041 [2024-07-15 18:22:10.229892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.041 [2024-07-15 18:22:10.230668] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.041 [2024-07-15 18:22:10.230683] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.298 18:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.298 18:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:18.298 18:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:18.298 18:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.556 BaseBdev1_malloc 00:08:18.556 18:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:18.815 true 00:08:18.815 18:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:19.074 [2024-07-15 18:22:11.383009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:19.074 [2024-07-15 18:22:11.383079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.074 [2024-07-15 18:22:11.383109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e197834780 00:08:19.074 [2024-07-15 18:22:11.383118] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
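
Each base device in the error tests is a three-layer stack, built by the RPCs traced above: a malloc bdev at the bottom, an error-injection bdev that wraps it under the name EE_<base>, and a passthru bdev that gives the test a stable top-level name. The same stack, condensed (rpc.py path and socket as before):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc          # registers EE_BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

The error layer stays dormant until bdev_error_inject_error is called, so the raid first assembles normally over BaseBdev1 and BaseBdev2, as the trace below shows.
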
00:08:19.074 [2024-07-15 18:22:11.383858] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.074 [2024-07-15 18:22:11.383883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:19.074 BaseBdev1 00:08:19.074 18:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:19.074 18:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:19.360 BaseBdev2_malloc 00:08:19.360 18:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:19.634 true 00:08:19.634 18:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.892 [2024-07-15 18:22:12.203199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.892 [2024-07-15 18:22:12.203255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.892 [2024-07-15 18:22:12.203282] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e197834c80 00:08:19.892 [2024-07-15 18:22:12.203291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.892 [2024-07-15 18:22:12.204034] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.892 [2024-07-15 18:22:12.204058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.892 BaseBdev2 00:08:19.892 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:20.150 [2024-07-15 18:22:12.443264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.150 [2024-07-15 18:22:12.443911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.150 [2024-07-15 18:22:12.443982] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e197834f00 00:08:20.150 [2024-07-15 18:22:12.443989] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.150 [2024-07-15 18:22:12.444023] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e1978a0e20 00:08:20.150 [2024-07-15 18:22:12.444121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e197834f00 00:08:20.150 [2024-07-15 18:22:12.444126] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e197834f00 00:08:20.150 [2024-07-15 18:22:12.444156] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:20.150 18:22:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.150 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.479 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.479 "name": "raid_bdev1", 00:08:20.479 "uuid": "27c5849a-42d7-11ef-9ade-d5fc5159efa5", 00:08:20.479 "strip_size_kb": 64, 00:08:20.479 "state": "online", 00:08:20.479 "raid_level": "concat", 00:08:20.479 "superblock": true, 00:08:20.479 "num_base_bdevs": 2, 00:08:20.479 "num_base_bdevs_discovered": 2, 00:08:20.479 "num_base_bdevs_operational": 2, 00:08:20.479 "base_bdevs_list": [ 00:08:20.479 { 00:08:20.479 "name": "BaseBdev1", 00:08:20.479 "uuid": "bc93c9a8-edb4-dc53-b008-4b2e13b075cb", 00:08:20.479 "is_configured": true, 00:08:20.479 "data_offset": 2048, 00:08:20.479 "data_size": 63488 00:08:20.479 }, 00:08:20.479 { 00:08:20.479 "name": "BaseBdev2", 00:08:20.479 "uuid": "41ad53be-7b77-5b55-87f5-26ca8aef4efd", 00:08:20.479 "is_configured": true, 00:08:20.479 "data_offset": 2048, 00:08:20.479 "data_size": 63488 00:08:20.479 } 00:08:20.479 ] 00:08:20.479 }' 00:08:20.479 18:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.479 18:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.738 18:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:20.738 18:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:20.996 [2024-07-15 18:22:13.175663] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e1978a0ec0 00:08:21.930 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:22.187 18:22:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.187 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.445 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:22.445 "name": "raid_bdev1", 00:08:22.445 "uuid": "27c5849a-42d7-11ef-9ade-d5fc5159efa5", 00:08:22.445 "strip_size_kb": 64, 00:08:22.445 "state": "online", 00:08:22.445 "raid_level": "concat", 00:08:22.445 "superblock": true, 00:08:22.445 "num_base_bdevs": 2, 00:08:22.445 "num_base_bdevs_discovered": 2, 00:08:22.445 "num_base_bdevs_operational": 2, 00:08:22.445 "base_bdevs_list": [ 00:08:22.445 { 00:08:22.445 "name": "BaseBdev1", 00:08:22.445 "uuid": "bc93c9a8-edb4-dc53-b008-4b2e13b075cb", 00:08:22.445 "is_configured": true, 00:08:22.445 "data_offset": 2048, 00:08:22.445 "data_size": 63488 00:08:22.445 }, 00:08:22.445 { 00:08:22.445 "name": "BaseBdev2", 00:08:22.445 "uuid": "41ad53be-7b77-5b55-87f5-26ca8aef4efd", 00:08:22.445 "is_configured": true, 00:08:22.445 "data_offset": 2048, 00:08:22.445 "data_size": 63488 00:08:22.445 } 00:08:22.445 ] 00:08:22.445 }' 00:08:22.445 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:22.445 18:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 18:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:22.960 [2024-07-15 18:22:15.175065] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.960 [2024-07-15 18:22:15.175095] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.960 [2024-07-15 18:22:15.175426] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.960 [2024-07-15 18:22:15.175436] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.960 [2024-07-15 18:22:15.175442] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.960 [2024-07-15 18:22:15.175446] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e197834f00 name raid_bdev1, state offline 00:08:22.960 0 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50530 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50530 ']' 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50530 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50530 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50530' 00:08:22.960 killing process with pid 50530 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50530 00:08:22.960 [2024-07-15 18:22:15.209552] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.960 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50530 00:08:22.960 [2024-07-15 18:22:15.224222] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.219 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.O6ftjMA3nX 00:08:23.219 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:23.219 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:23.219 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:08:23.219 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:23.220 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:23.220 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:23.220 18:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:08:23.220 00:08:23.220 real 0m5.956s 00:08:23.220 user 0m9.072s 00:08:23.220 sys 0m1.117s 00:08:23.220 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.220 18:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.220 ************************************ 00:08:23.220 END TEST raid_read_error_test 00:08:23.220 ************************************ 00:08:23.220 18:22:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:23.220 18:22:15 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:23.220 18:22:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:23.220 18:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.220 18:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.220 ************************************ 00:08:23.220 START TEST raid_write_error_test 00:08:23.220 ************************************ 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vJ2LHLnE2i 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50658 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50658 /var/tmp/spdk-raid.sock 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50658 ']' 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:23.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.220 18:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.220 [2024-07-15 18:22:15.522978] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
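The xtrace entries around this point show how raid_io_error_test assembles its fault-injection stack before bdevperf runs. A minimal sketch of that RPC sequence, reconstructed from the commands logged above and below (socket path, sizes, and bdev names exactly as logged; the rpc and bdevperf_log shell variables are illustrative, not from the script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 32 MiB malloc backing device with 512-byte blocks (65536 blocks, matching the JSON dumps)
$rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
# error bdev wraps the malloc as EE_BaseBdev1_malloc so faults can be injected later
$rpc bdev_error_create BaseBdev1_malloc
# passthru bdev gives the raid a stable name on top of the error bdev
$rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# (the same three steps repeat for BaseBdev2)
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
# arm the fault: 'write failure' in this test, 'read failure' in the read variant above
$rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

After perform_tests completes, the harness greps the bdevperf log (the mktemp file, /raidtest/tmp.vJ2LHLnE2i in this run) for the raid_bdev1 row and asserts that the sixth field, failures per second, is non-zero:

fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != 0.00 ]]   # injected faults must surface as a non-zero failure rate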
00:08:23.220 [2024-07-15 18:22:15.523127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:23.788 EAL: TSC is not safe to use in SMP mode 00:08:23.788 EAL: TSC is not invariant 00:08:23.788 [2024-07-15 18:22:16.126861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.046 [2024-07-15 18:22:16.238408] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:24.046 [2024-07-15 18:22:16.240694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.046 [2024-07-15 18:22:16.241484] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.046 [2024-07-15 18:22:16.241500] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.304 18:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.304 18:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:24.304 18:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:24.304 18:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:24.561 BaseBdev1_malloc 00:08:24.561 18:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:24.818 true 00:08:24.818 18:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.075 [2024-07-15 18:22:17.422034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.075 [2024-07-15 18:22:17.422105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.076 [2024-07-15 18:22:17.422134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a1af634780 00:08:25.076 [2024-07-15 18:22:17.422144] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.076 [2024-07-15 18:22:17.422851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.076 [2024-07-15 18:22:17.422878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.076 BaseBdev1 00:08:25.076 18:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:25.076 18:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.334 BaseBdev2_malloc 00:08:25.334 18:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:25.590 true 00:08:25.591 18:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.847 [2024-07-15 18:22:18.206184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.847 [2024-07-15 18:22:18.206246] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.847 [2024-07-15 18:22:18.206275] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a1af634c80 00:08:25.847 [2024-07-15 18:22:18.206284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.847 [2024-07-15 18:22:18.207016] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.847 [2024-07-15 18:22:18.207046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.847 BaseBdev2 00:08:25.847 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:26.103 [2024-07-15 18:22:18.446244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.103 [2024-07-15 18:22:18.446888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.103 [2024-07-15 18:22:18.446957] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a1af634f00 00:08:26.103 [2024-07-15 18:22:18.446968] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:26.103 [2024-07-15 18:22:18.447003] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a1af6a0e20 00:08:26.103 [2024-07-15 18:22:18.447083] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a1af634f00 00:08:26.103 [2024-07-15 18:22:18.447088] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9a1af634f00 00:08:26.103 [2024-07-15 18:22:18.447118] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.103 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.668 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:26.668 "name": "raid_bdev1", 00:08:26.668 "uuid": "2b597ffc-42d7-11ef-9ade-d5fc5159efa5", 00:08:26.668 "strip_size_kb": 64, 00:08:26.668 "state": "online", 00:08:26.668 
"raid_level": "concat", 00:08:26.668 "superblock": true, 00:08:26.668 "num_base_bdevs": 2, 00:08:26.668 "num_base_bdevs_discovered": 2, 00:08:26.668 "num_base_bdevs_operational": 2, 00:08:26.668 "base_bdevs_list": [ 00:08:26.668 { 00:08:26.668 "name": "BaseBdev1", 00:08:26.668 "uuid": "84926636-9cb9-b751-a8f1-51ca91387d84", 00:08:26.669 "is_configured": true, 00:08:26.669 "data_offset": 2048, 00:08:26.669 "data_size": 63488 00:08:26.669 }, 00:08:26.669 { 00:08:26.669 "name": "BaseBdev2", 00:08:26.669 "uuid": "d1a7e391-44b0-aa51-9068-f368e1ad7c90", 00:08:26.669 "is_configured": true, 00:08:26.669 "data_offset": 2048, 00:08:26.669 "data_size": 63488 00:08:26.669 } 00:08:26.669 ] 00:08:26.669 }' 00:08:26.669 18:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:26.669 18:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.976 18:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:26.976 18:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:26.976 [2024-07-15 18:22:19.222622] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a1af6a0ec0 00:08:27.919 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.177 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.436 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:28.436 "name": "raid_bdev1", 00:08:28.436 "uuid": "2b597ffc-42d7-11ef-9ade-d5fc5159efa5", 00:08:28.436 "strip_size_kb": 64, 00:08:28.436 "state": "online", 00:08:28.436 
"raid_level": "concat", 00:08:28.436 "superblock": true, 00:08:28.436 "num_base_bdevs": 2, 00:08:28.436 "num_base_bdevs_discovered": 2, 00:08:28.436 "num_base_bdevs_operational": 2, 00:08:28.436 "base_bdevs_list": [ 00:08:28.436 { 00:08:28.436 "name": "BaseBdev1", 00:08:28.436 "uuid": "84926636-9cb9-b751-a8f1-51ca91387d84", 00:08:28.436 "is_configured": true, 00:08:28.436 "data_offset": 2048, 00:08:28.436 "data_size": 63488 00:08:28.436 }, 00:08:28.436 { 00:08:28.436 "name": "BaseBdev2", 00:08:28.436 "uuid": "d1a7e391-44b0-aa51-9068-f368e1ad7c90", 00:08:28.436 "is_configured": true, 00:08:28.436 "data_offset": 2048, 00:08:28.436 "data_size": 63488 00:08:28.436 } 00:08:28.436 ] 00:08:28.436 }' 00:08:28.436 18:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:28.436 18:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.004 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:29.004 [2024-07-15 18:22:21.350107] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.004 [2024-07-15 18:22:21.350139] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.004 [2024-07-15 18:22:21.350476] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.005 [2024-07-15 18:22:21.350486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.005 [2024-07-15 18:22:21.350493] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.005 [2024-07-15 18:22:21.350497] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a1af634f00 name raid_bdev1, state offline 00:08:29.005 0 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50658 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50658 ']' 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50658 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50658 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:29.005 killing process with pid 50658 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50658' 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50658 00:08:29.005 [2024-07-15 18:22:21.379321] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.005 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50658 00:08:29.263 [2024-07-15 18:22:21.393604] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.vJ2LHLnE2i 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:08:29.264 00:08:29.264 real 0m6.113s 00:08:29.264 user 0m9.274s 00:08:29.264 sys 0m1.204s 00:08:29.264 ************************************ 00:08:29.264 END TEST raid_write_error_test 00:08:29.264 ************************************ 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.264 18:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.523 18:22:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:29.523 18:22:21 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:29.523 18:22:21 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:29.523 18:22:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:29.523 18:22:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.523 18:22:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.523 ************************************ 00:08:29.523 START TEST raid_state_function_test 00:08:29.523 ************************************ 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:29.523 18:22:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50784 00:08:29.523 Process raid pid: 50784 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50784' 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50784 /var/tmp/spdk-raid.sock 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:29.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50784 ']' 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.523 18:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.523 [2024-07-15 18:22:21.677930] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:29.523 [2024-07-15 18:22:21.678105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:30.090 EAL: TSC is not safe to use in SMP mode 00:08:30.090 EAL: TSC is not invariant 00:08:30.090 [2024-07-15 18:22:22.300821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.090 [2024-07-15 18:22:22.419766] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
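From here the state-function test drives the same RPC socket through bdev_svc and repeatedly asserts on the JSON that bdev_raid_get_bdevs returns. A minimal sketch of the first assertion made below, assuming bash and jq as used throughout this log (the rpc and info variable names are illustrative; the fields mirror the raid_bdev_info dumps, and the actual verify_raid_bdev_state helper may parse them differently):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# creating the raid1 before either base bdev exists leaves it in 'configuring'
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<<"$info") == configuring ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") -eq 0 ]]   # 0 of 2 operational bases found yet

Once bdev_malloc_create supplies BaseBdev1 and BaseBdev2, the same query reports state online with both bases discovered; and because raid1 has redundancy, deleting one base bdev later in the test leaves the array online with num_base_bdevs_operational reduced to 1, as the dumps below confirm.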
00:08:30.090 [2024-07-15 18:22:22.422245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.090 [2024-07-15 18:22:22.423220] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.090 [2024-07-15 18:22:22.423238] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.349 18:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.349 18:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:30.349 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:30.654 [2024-07-15 18:22:22.980832] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.654 [2024-07-15 18:22:22.980894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.654 [2024-07-15 18:22:22.980900] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.654 [2024-07-15 18:22:22.980909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:30.654 18:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.654 18:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.912 18:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:30.912 "name": "Existed_Raid", 00:08:30.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.912 "strip_size_kb": 0, 00:08:30.912 "state": "configuring", 00:08:30.912 "raid_level": "raid1", 00:08:30.912 "superblock": false, 00:08:30.912 "num_base_bdevs": 2, 00:08:30.913 "num_base_bdevs_discovered": 0, 00:08:30.913 "num_base_bdevs_operational": 2, 00:08:30.913 "base_bdevs_list": [ 00:08:30.913 { 00:08:30.913 "name": "BaseBdev1", 00:08:30.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.913 "is_configured": false, 00:08:30.913 "data_offset": 0, 00:08:30.913 "data_size": 0 00:08:30.913 }, 00:08:30.913 { 00:08:30.913 "name": "BaseBdev2", 00:08:30.913 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.913 "is_configured": false, 00:08:30.913 "data_offset": 0, 00:08:30.913 "data_size": 0 00:08:30.913 } 00:08:30.913 ] 00:08:30.913 }' 00:08:30.913 18:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:30.913 18:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.479 18:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:31.479 [2024-07-15 18:22:23.832981] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.479 [2024-07-15 18:22:23.833011] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x47169034500 name Existed_Raid, state configuring 00:08:31.479 18:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:32.047 [2024-07-15 18:22:24.141040] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.047 [2024-07-15 18:22:24.141092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.047 [2024-07-15 18:22:24.141098] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.047 [2024-07-15 18:22:24.141107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.047 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.047 [2024-07-15 18:22:24.414224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.047 BaseBdev1 00:08:32.047 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:32.047 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:32.047 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:32.047 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:32.306 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:32.306 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:32.306 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:32.565 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.825 [ 00:08:32.825 { 00:08:32.825 "name": "BaseBdev1", 00:08:32.825 "aliases": [ 00:08:32.825 "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5" 00:08:32.825 ], 00:08:32.825 "product_name": "Malloc disk", 00:08:32.825 "block_size": 512, 00:08:32.825 "num_blocks": 65536, 00:08:32.825 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:32.825 "assigned_rate_limits": { 00:08:32.825 "rw_ios_per_sec": 0, 00:08:32.825 "rw_mbytes_per_sec": 0, 00:08:32.825 "r_mbytes_per_sec": 0, 00:08:32.825 "w_mbytes_per_sec": 0 00:08:32.825 }, 00:08:32.825 
"claimed": true, 00:08:32.825 "claim_type": "exclusive_write", 00:08:32.825 "zoned": false, 00:08:32.825 "supported_io_types": { 00:08:32.825 "read": true, 00:08:32.825 "write": true, 00:08:32.825 "unmap": true, 00:08:32.825 "flush": true, 00:08:32.825 "reset": true, 00:08:32.825 "nvme_admin": false, 00:08:32.825 "nvme_io": false, 00:08:32.825 "nvme_io_md": false, 00:08:32.825 "write_zeroes": true, 00:08:32.825 "zcopy": true, 00:08:32.825 "get_zone_info": false, 00:08:32.825 "zone_management": false, 00:08:32.825 "zone_append": false, 00:08:32.825 "compare": false, 00:08:32.825 "compare_and_write": false, 00:08:32.825 "abort": true, 00:08:32.825 "seek_hole": false, 00:08:32.825 "seek_data": false, 00:08:32.825 "copy": true, 00:08:32.825 "nvme_iov_md": false 00:08:32.825 }, 00:08:32.825 "memory_domains": [ 00:08:32.825 { 00:08:32.825 "dma_device_id": "system", 00:08:32.825 "dma_device_type": 1 00:08:32.825 }, 00:08:32.825 { 00:08:32.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.825 "dma_device_type": 2 00:08:32.825 } 00:08:32.825 ], 00:08:32.825 "driver_specific": {} 00:08:32.825 } 00:08:32.825 ] 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.825 18:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.084 18:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:33.084 "name": "Existed_Raid", 00:08:33.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.084 "strip_size_kb": 0, 00:08:33.084 "state": "configuring", 00:08:33.084 "raid_level": "raid1", 00:08:33.084 "superblock": false, 00:08:33.084 "num_base_bdevs": 2, 00:08:33.084 "num_base_bdevs_discovered": 1, 00:08:33.084 "num_base_bdevs_operational": 2, 00:08:33.084 "base_bdevs_list": [ 00:08:33.084 { 00:08:33.084 "name": "BaseBdev1", 00:08:33.084 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:33.084 "is_configured": true, 00:08:33.084 "data_offset": 0, 00:08:33.084 "data_size": 65536 00:08:33.084 }, 00:08:33.084 { 00:08:33.084 "name": "BaseBdev2", 00:08:33.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.084 
"is_configured": false, 00:08:33.084 "data_offset": 0, 00:08:33.084 "data_size": 0 00:08:33.084 } 00:08:33.084 ] 00:08:33.084 }' 00:08:33.084 18:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:33.084 18:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.343 18:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:33.601 [2024-07-15 18:22:25.833354] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.601 [2024-07-15 18:22:25.833396] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x47169034500 name Existed_Raid, state configuring 00:08:33.601 18:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:33.860 [2024-07-15 18:22:26.121423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.860 [2024-07-15 18:22:26.122279] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.860 [2024-07-15 18:22:26.122320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:33.860 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.118 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.118 "name": "Existed_Raid", 00:08:34.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.118 "strip_size_kb": 0, 00:08:34.118 "state": "configuring", 00:08:34.118 "raid_level": "raid1", 00:08:34.118 "superblock": false, 00:08:34.118 "num_base_bdevs": 2, 00:08:34.118 "num_base_bdevs_discovered": 1, 00:08:34.118 "num_base_bdevs_operational": 
2, 00:08:34.118 "base_bdevs_list": [ 00:08:34.118 { 00:08:34.118 "name": "BaseBdev1", 00:08:34.118 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:34.118 "is_configured": true, 00:08:34.118 "data_offset": 0, 00:08:34.118 "data_size": 65536 00:08:34.118 }, 00:08:34.118 { 00:08:34.118 "name": "BaseBdev2", 00:08:34.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.118 "is_configured": false, 00:08:34.118 "data_offset": 0, 00:08:34.118 "data_size": 0 00:08:34.118 } 00:08:34.118 ] 00:08:34.118 }' 00:08:34.118 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.118 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.377 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.635 [2024-07-15 18:22:26.961706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.635 [2024-07-15 18:22:26.961741] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x47169034a00 00:08:34.635 [2024-07-15 18:22:26.961747] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:34.635 [2024-07-15 18:22:26.961769] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x47169097e20 00:08:34.635 [2024-07-15 18:22:26.961863] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x47169034a00 00:08:34.635 [2024-07-15 18:22:26.961868] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x47169034a00 00:08:34.635 [2024-07-15 18:22:26.961902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.635 BaseBdev2 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:34.635 18:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:34.893 18:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.151 [ 00:08:35.151 { 00:08:35.151 "name": "BaseBdev2", 00:08:35.151 "aliases": [ 00:08:35.151 "306cd634-42d7-11ef-9ade-d5fc5159efa5" 00:08:35.151 ], 00:08:35.151 "product_name": "Malloc disk", 00:08:35.151 "block_size": 512, 00:08:35.151 "num_blocks": 65536, 00:08:35.151 "uuid": "306cd634-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.151 "assigned_rate_limits": { 00:08:35.151 "rw_ios_per_sec": 0, 00:08:35.151 "rw_mbytes_per_sec": 0, 00:08:35.151 "r_mbytes_per_sec": 0, 00:08:35.151 "w_mbytes_per_sec": 0 00:08:35.151 }, 00:08:35.151 "claimed": true, 00:08:35.151 "claim_type": "exclusive_write", 00:08:35.151 "zoned": false, 00:08:35.151 
"supported_io_types": { 00:08:35.151 "read": true, 00:08:35.151 "write": true, 00:08:35.151 "unmap": true, 00:08:35.151 "flush": true, 00:08:35.151 "reset": true, 00:08:35.151 "nvme_admin": false, 00:08:35.151 "nvme_io": false, 00:08:35.151 "nvme_io_md": false, 00:08:35.151 "write_zeroes": true, 00:08:35.151 "zcopy": true, 00:08:35.151 "get_zone_info": false, 00:08:35.151 "zone_management": false, 00:08:35.151 "zone_append": false, 00:08:35.151 "compare": false, 00:08:35.151 "compare_and_write": false, 00:08:35.151 "abort": true, 00:08:35.151 "seek_hole": false, 00:08:35.151 "seek_data": false, 00:08:35.151 "copy": true, 00:08:35.151 "nvme_iov_md": false 00:08:35.151 }, 00:08:35.151 "memory_domains": [ 00:08:35.151 { 00:08:35.151 "dma_device_id": "system", 00:08:35.151 "dma_device_type": 1 00:08:35.151 }, 00:08:35.151 { 00:08:35.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.151 "dma_device_type": 2 00:08:35.151 } 00:08:35.151 ], 00:08:35.151 "driver_specific": {} 00:08:35.151 } 00:08:35.151 ] 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.151 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.410 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:35.410 "name": "Existed_Raid", 00:08:35.410 "uuid": "306cdd70-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.410 "strip_size_kb": 0, 00:08:35.410 "state": "online", 00:08:35.410 "raid_level": "raid1", 00:08:35.410 "superblock": false, 00:08:35.410 "num_base_bdevs": 2, 00:08:35.410 "num_base_bdevs_discovered": 2, 00:08:35.410 "num_base_bdevs_operational": 2, 00:08:35.410 "base_bdevs_list": [ 00:08:35.410 { 00:08:35.410 "name": "BaseBdev1", 00:08:35.410 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.410 "is_configured": true, 00:08:35.410 "data_offset": 0, 00:08:35.410 "data_size": 65536 00:08:35.410 }, 00:08:35.410 { 00:08:35.410 "name": 
"BaseBdev2", 00:08:35.410 "uuid": "306cd634-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.410 "is_configured": true, 00:08:35.410 "data_offset": 0, 00:08:35.410 "data_size": 65536 00:08:35.410 } 00:08:35.410 ] 00:08:35.410 }' 00:08:35.410 18:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:35.410 18:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:35.694 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:35.952 [2024-07-15 18:22:28.269834] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.952 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:35.952 "name": "Existed_Raid", 00:08:35.952 "aliases": [ 00:08:35.952 "306cdd70-42d7-11ef-9ade-d5fc5159efa5" 00:08:35.952 ], 00:08:35.952 "product_name": "Raid Volume", 00:08:35.952 "block_size": 512, 00:08:35.952 "num_blocks": 65536, 00:08:35.952 "uuid": "306cdd70-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.952 "assigned_rate_limits": { 00:08:35.952 "rw_ios_per_sec": 0, 00:08:35.952 "rw_mbytes_per_sec": 0, 00:08:35.952 "r_mbytes_per_sec": 0, 00:08:35.952 "w_mbytes_per_sec": 0 00:08:35.952 }, 00:08:35.952 "claimed": false, 00:08:35.953 "zoned": false, 00:08:35.953 "supported_io_types": { 00:08:35.953 "read": true, 00:08:35.953 "write": true, 00:08:35.953 "unmap": false, 00:08:35.953 "flush": false, 00:08:35.953 "reset": true, 00:08:35.953 "nvme_admin": false, 00:08:35.953 "nvme_io": false, 00:08:35.953 "nvme_io_md": false, 00:08:35.953 "write_zeroes": true, 00:08:35.953 "zcopy": false, 00:08:35.953 "get_zone_info": false, 00:08:35.953 "zone_management": false, 00:08:35.953 "zone_append": false, 00:08:35.953 "compare": false, 00:08:35.953 "compare_and_write": false, 00:08:35.953 "abort": false, 00:08:35.953 "seek_hole": false, 00:08:35.953 "seek_data": false, 00:08:35.953 "copy": false, 00:08:35.953 "nvme_iov_md": false 00:08:35.953 }, 00:08:35.953 "memory_domains": [ 00:08:35.953 { 00:08:35.953 "dma_device_id": "system", 00:08:35.953 "dma_device_type": 1 00:08:35.953 }, 00:08:35.953 { 00:08:35.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.953 "dma_device_type": 2 00:08:35.953 }, 00:08:35.953 { 00:08:35.953 "dma_device_id": "system", 00:08:35.953 "dma_device_type": 1 00:08:35.953 }, 00:08:35.953 { 00:08:35.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.953 "dma_device_type": 2 00:08:35.953 } 00:08:35.953 ], 00:08:35.953 "driver_specific": { 00:08:35.953 "raid": { 00:08:35.953 "uuid": "306cdd70-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.953 "strip_size_kb": 0, 00:08:35.953 "state": "online", 00:08:35.953 
"raid_level": "raid1", 00:08:35.953 "superblock": false, 00:08:35.953 "num_base_bdevs": 2, 00:08:35.953 "num_base_bdevs_discovered": 2, 00:08:35.953 "num_base_bdevs_operational": 2, 00:08:35.953 "base_bdevs_list": [ 00:08:35.953 { 00:08:35.953 "name": "BaseBdev1", 00:08:35.953 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.953 "is_configured": true, 00:08:35.953 "data_offset": 0, 00:08:35.953 "data_size": 65536 00:08:35.953 }, 00:08:35.953 { 00:08:35.953 "name": "BaseBdev2", 00:08:35.953 "uuid": "306cd634-42d7-11ef-9ade-d5fc5159efa5", 00:08:35.953 "is_configured": true, 00:08:35.953 "data_offset": 0, 00:08:35.953 "data_size": 65536 00:08:35.953 } 00:08:35.953 ] 00:08:35.953 } 00:08:35.953 } 00:08:35.953 }' 00:08:35.953 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.953 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:35.953 BaseBdev2' 00:08:35.953 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:35.953 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:35.953 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:36.211 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:36.212 "name": "BaseBdev1", 00:08:36.212 "aliases": [ 00:08:36.212 "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5" 00:08:36.212 ], 00:08:36.212 "product_name": "Malloc disk", 00:08:36.212 "block_size": 512, 00:08:36.212 "num_blocks": 65536, 00:08:36.212 "uuid": "2ee7f97d-42d7-11ef-9ade-d5fc5159efa5", 00:08:36.212 "assigned_rate_limits": { 00:08:36.212 "rw_ios_per_sec": 0, 00:08:36.212 "rw_mbytes_per_sec": 0, 00:08:36.212 "r_mbytes_per_sec": 0, 00:08:36.212 "w_mbytes_per_sec": 0 00:08:36.212 }, 00:08:36.212 "claimed": true, 00:08:36.212 "claim_type": "exclusive_write", 00:08:36.212 "zoned": false, 00:08:36.212 "supported_io_types": { 00:08:36.212 "read": true, 00:08:36.212 "write": true, 00:08:36.212 "unmap": true, 00:08:36.212 "flush": true, 00:08:36.212 "reset": true, 00:08:36.212 "nvme_admin": false, 00:08:36.212 "nvme_io": false, 00:08:36.212 "nvme_io_md": false, 00:08:36.212 "write_zeroes": true, 00:08:36.212 "zcopy": true, 00:08:36.212 "get_zone_info": false, 00:08:36.212 "zone_management": false, 00:08:36.212 "zone_append": false, 00:08:36.212 "compare": false, 00:08:36.212 "compare_and_write": false, 00:08:36.212 "abort": true, 00:08:36.212 "seek_hole": false, 00:08:36.212 "seek_data": false, 00:08:36.212 "copy": true, 00:08:36.212 "nvme_iov_md": false 00:08:36.212 }, 00:08:36.212 "memory_domains": [ 00:08:36.212 { 00:08:36.212 "dma_device_id": "system", 00:08:36.212 "dma_device_type": 1 00:08:36.212 }, 00:08:36.212 { 00:08:36.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.212 "dma_device_type": 2 00:08:36.212 } 00:08:36.212 ], 00:08:36.212 "driver_specific": {} 00:08:36.212 }' 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.212 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.470 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:36.470 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:36.470 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:36.470 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:36.728 "name": "BaseBdev2", 00:08:36.728 "aliases": [ 00:08:36.728 "306cd634-42d7-11ef-9ade-d5fc5159efa5" 00:08:36.728 ], 00:08:36.728 "product_name": "Malloc disk", 00:08:36.728 "block_size": 512, 00:08:36.728 "num_blocks": 65536, 00:08:36.728 "uuid": "306cd634-42d7-11ef-9ade-d5fc5159efa5", 00:08:36.728 "assigned_rate_limits": { 00:08:36.728 "rw_ios_per_sec": 0, 00:08:36.728 "rw_mbytes_per_sec": 0, 00:08:36.728 "r_mbytes_per_sec": 0, 00:08:36.728 "w_mbytes_per_sec": 0 00:08:36.728 }, 00:08:36.728 "claimed": true, 00:08:36.728 "claim_type": "exclusive_write", 00:08:36.728 "zoned": false, 00:08:36.728 "supported_io_types": { 00:08:36.728 "read": true, 00:08:36.728 "write": true, 00:08:36.728 "unmap": true, 00:08:36.728 "flush": true, 00:08:36.728 "reset": true, 00:08:36.728 "nvme_admin": false, 00:08:36.728 "nvme_io": false, 00:08:36.728 "nvme_io_md": false, 00:08:36.728 "write_zeroes": true, 00:08:36.728 "zcopy": true, 00:08:36.728 "get_zone_info": false, 00:08:36.728 "zone_management": false, 00:08:36.728 "zone_append": false, 00:08:36.728 "compare": false, 00:08:36.728 "compare_and_write": false, 00:08:36.728 "abort": true, 00:08:36.728 "seek_hole": false, 00:08:36.728 "seek_data": false, 00:08:36.728 "copy": true, 00:08:36.728 "nvme_iov_md": false 00:08:36.728 }, 00:08:36.728 "memory_domains": [ 00:08:36.728 { 00:08:36.728 "dma_device_id": "system", 00:08:36.728 "dma_device_type": 1 00:08:36.728 }, 00:08:36.728 { 00:08:36.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.728 "dma_device_type": 2 00:08:36.728 } 00:08:36.728 ], 00:08:36.728 "driver_specific": {} 00:08:36.728 }' 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:36.728 18:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:36.986 [2024-07-15 18:22:29.153956] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.986 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.244 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:37.244 "name": "Existed_Raid", 00:08:37.244 "uuid": "306cdd70-42d7-11ef-9ade-d5fc5159efa5", 00:08:37.244 "strip_size_kb": 0, 00:08:37.244 "state": "online", 00:08:37.244 "raid_level": "raid1", 00:08:37.244 "superblock": false, 00:08:37.244 "num_base_bdevs": 2, 00:08:37.244 "num_base_bdevs_discovered": 1, 00:08:37.244 "num_base_bdevs_operational": 1, 00:08:37.245 "base_bdevs_list": [ 00:08:37.245 { 00:08:37.245 "name": null, 00:08:37.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.245 "is_configured": false, 
00:08:37.245 "data_offset": 0, 00:08:37.245 "data_size": 65536 00:08:37.245 }, 00:08:37.245 { 00:08:37.245 "name": "BaseBdev2", 00:08:37.245 "uuid": "306cd634-42d7-11ef-9ade-d5fc5159efa5", 00:08:37.245 "is_configured": true, 00:08:37.245 "data_offset": 0, 00:08:37.245 "data_size": 65536 00:08:37.245 } 00:08:37.245 ] 00:08:37.245 }' 00:08:37.245 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:37.245 18:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:37.502 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:37.502 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.502 18:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:37.761 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:37.761 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.761 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:38.019 [2024-07-15 18:22:30.323972] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.019 [2024-07-15 18:22:30.324018] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.019 [2024-07-15 18:22:30.333083] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.019 [2024-07-15 18:22:30.333118] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.019 [2024-07-15 18:22:30.333126] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x47169034a00 name Existed_Raid, state offline 00:08:38.020 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:38.020 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:38.020 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.020 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50784 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50784 ']' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50784 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # ps -c -o command 50784 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:38.278 killing process with pid 50784 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50784' 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50784 00:08:38.278 [2024-07-15 18:22:30.618753] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.278 [2024-07-15 18:22:30.618791] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.278 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50784 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:38.537 00:08:38.537 real 0m9.171s 00:08:38.537 user 0m15.939s 00:08:38.537 sys 0m1.590s 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.537 ************************************ 00:08:38.537 END TEST raid_state_function_test 00:08:38.537 ************************************ 00:08:38.537 18:22:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:38.537 18:22:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:38.537 18:22:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:38.537 18:22:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.537 18:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.537 ************************************ 00:08:38.537 START TEST raid_state_function_test_sb 00:08:38.537 ************************************ 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:38.537 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51059 00:08:38.538 Process raid pid: 51059 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51059' 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51059 /var/tmp/spdk-raid.sock 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51059 ']' 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.538 18:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.538 [2024-07-15 18:22:30.898526] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:38.538 [2024-07-15 18:22:30.898789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:39.473 EAL: TSC is not safe to use in SMP mode 00:08:39.473 EAL: TSC is not invariant 00:08:39.473 [2024-07-15 18:22:31.499630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.473 [2024-07-15 18:22:31.619160] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
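[Note: the RPC sequence this superblock test drives can be replayed by hand once the bdev_svc app is listening. A minimal sketch, assuming the same rpc.py helper and /var/tmp/spdk-raid.sock socket used throughout this log; the expected states follow the verify_raid_bdev_state checks in the surrounding output.]
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # creating the superblock raid1 volume before its base bdevs exist leaves it in state "configuring"
  $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # once both base bdevs are created and claimed, the volume transitions to "online"
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  # raid1 is redundant, so deleting one base bdev leaves the volume online with one operational leg
  $rpc -s $sock bdev_malloc_delete BaseBdev1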
00:08:39.473 [2024-07-15 18:22:31.621885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.473 [2024-07-15 18:22:31.622969] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.473 [2024-07-15 18:22:31.622992] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.732 18:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.732 18:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:39.732 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:39.991 [2024-07-15 18:22:32.260682] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.991 [2024-07-15 18:22:32.260737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.991 [2024-07-15 18:22:32.260746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.991 [2024-07-15 18:22:32.260760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.991 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.255 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.255 "name": "Existed_Raid", 00:08:40.256 "uuid": "33956a95-42d7-11ef-9ade-d5fc5159efa5", 00:08:40.256 "strip_size_kb": 0, 00:08:40.256 "state": "configuring", 00:08:40.256 "raid_level": "raid1", 00:08:40.256 "superblock": true, 00:08:40.256 "num_base_bdevs": 2, 00:08:40.256 "num_base_bdevs_discovered": 0, 00:08:40.256 "num_base_bdevs_operational": 2, 00:08:40.256 "base_bdevs_list": [ 00:08:40.256 { 00:08:40.256 "name": "BaseBdev1", 00:08:40.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.256 "is_configured": false, 00:08:40.256 "data_offset": 0, 00:08:40.256 "data_size": 0 00:08:40.256 }, 00:08:40.256 
{ 00:08:40.256 "name": "BaseBdev2", 00:08:40.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.256 "is_configured": false, 00:08:40.256 "data_offset": 0, 00:08:40.256 "data_size": 0 00:08:40.256 } 00:08:40.256 ] 00:08:40.256 }' 00:08:40.256 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.256 18:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.524 18:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:40.782 [2024-07-15 18:22:33.084654] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.782 [2024-07-15 18:22:33.084685] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19cad6434500 name Existed_Raid, state configuring 00:08:40.782 18:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:41.041 [2024-07-15 18:22:33.368664] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.041 [2024-07-15 18:22:33.368719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.041 [2024-07-15 18:22:33.368728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.041 [2024-07-15 18:22:33.368743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.041 18:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.300 [2024-07-15 18:22:33.637751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.300 BaseBdev1 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:41.300 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:41.558 18:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.817 [ 00:08:41.817 { 00:08:41.817 "name": "BaseBdev1", 00:08:41.817 "aliases": [ 00:08:41.817 "34676009-42d7-11ef-9ade-d5fc5159efa5" 00:08:41.817 ], 00:08:41.817 "product_name": "Malloc disk", 00:08:41.817 "block_size": 512, 00:08:41.817 "num_blocks": 65536, 00:08:41.817 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:41.817 "assigned_rate_limits": { 00:08:41.817 "rw_ios_per_sec": 0, 00:08:41.817 "rw_mbytes_per_sec": 0, 00:08:41.817 
"r_mbytes_per_sec": 0, 00:08:41.817 "w_mbytes_per_sec": 0 00:08:41.817 }, 00:08:41.817 "claimed": true, 00:08:41.817 "claim_type": "exclusive_write", 00:08:41.817 "zoned": false, 00:08:41.817 "supported_io_types": { 00:08:41.817 "read": true, 00:08:41.817 "write": true, 00:08:41.817 "unmap": true, 00:08:41.817 "flush": true, 00:08:41.817 "reset": true, 00:08:41.817 "nvme_admin": false, 00:08:41.817 "nvme_io": false, 00:08:41.817 "nvme_io_md": false, 00:08:41.817 "write_zeroes": true, 00:08:41.817 "zcopy": true, 00:08:41.817 "get_zone_info": false, 00:08:41.817 "zone_management": false, 00:08:41.817 "zone_append": false, 00:08:41.817 "compare": false, 00:08:41.817 "compare_and_write": false, 00:08:41.817 "abort": true, 00:08:41.817 "seek_hole": false, 00:08:41.817 "seek_data": false, 00:08:41.817 "copy": true, 00:08:41.817 "nvme_iov_md": false 00:08:41.817 }, 00:08:41.817 "memory_domains": [ 00:08:41.817 { 00:08:41.817 "dma_device_id": "system", 00:08:41.817 "dma_device_type": 1 00:08:41.817 }, 00:08:41.817 { 00:08:41.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.817 "dma_device_type": 2 00:08:41.817 } 00:08:41.817 ], 00:08:41.817 "driver_specific": {} 00:08:41.817 } 00:08:41.817 ] 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.817 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.076 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:42.076 "name": "Existed_Raid", 00:08:42.076 "uuid": "343e7b25-42d7-11ef-9ade-d5fc5159efa5", 00:08:42.076 "strip_size_kb": 0, 00:08:42.076 "state": "configuring", 00:08:42.076 "raid_level": "raid1", 00:08:42.076 "superblock": true, 00:08:42.076 "num_base_bdevs": 2, 00:08:42.076 "num_base_bdevs_discovered": 1, 00:08:42.076 "num_base_bdevs_operational": 2, 00:08:42.076 "base_bdevs_list": [ 00:08:42.076 { 00:08:42.076 "name": "BaseBdev1", 00:08:42.076 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:42.076 "is_configured": true, 00:08:42.076 "data_offset": 2048, 00:08:42.076 "data_size": 63488 00:08:42.076 }, 
00:08:42.076 { 00:08:42.076 "name": "BaseBdev2", 00:08:42.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.076 "is_configured": false, 00:08:42.076 "data_offset": 0, 00:08:42.076 "data_size": 0 00:08:42.076 } 00:08:42.076 ] 00:08:42.076 }' 00:08:42.076 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:42.076 18:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:42.901 [2024-07-15 18:22:34.980642] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.901 [2024-07-15 18:22:34.980677] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19cad6434500 name Existed_Raid, state configuring 00:08:42.901 18:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:42.901 [2024-07-15 18:22:35.256672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.901 [2024-07-15 18:22:35.257493] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.901 [2024-07-15 18:22:35.257534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.901 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.159 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:43.159 "name": "Existed_Raid", 00:08:43.159 "uuid": "355e9148-42d7-11ef-9ade-d5fc5159efa5", 00:08:43.159 "strip_size_kb": 0, 00:08:43.159 "state": "configuring", 
00:08:43.159 "raid_level": "raid1", 00:08:43.159 "superblock": true, 00:08:43.159 "num_base_bdevs": 2, 00:08:43.159 "num_base_bdevs_discovered": 1, 00:08:43.159 "num_base_bdevs_operational": 2, 00:08:43.159 "base_bdevs_list": [ 00:08:43.159 { 00:08:43.159 "name": "BaseBdev1", 00:08:43.159 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:43.159 "is_configured": true, 00:08:43.159 "data_offset": 2048, 00:08:43.159 "data_size": 63488 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "name": "BaseBdev2", 00:08:43.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.159 "is_configured": false, 00:08:43.159 "data_offset": 0, 00:08:43.159 "data_size": 0 00:08:43.159 } 00:08:43.159 ] 00:08:43.159 }' 00:08:43.159 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:43.159 18:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.725 18:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.981 [2024-07-15 18:22:36.148797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.981 [2024-07-15 18:22:36.148862] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19cad6434a00 00:08:43.981 [2024-07-15 18:22:36.148868] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.981 [2024-07-15 18:22:36.148890] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19cad6497e20 00:08:43.981 [2024-07-15 18:22:36.148940] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19cad6434a00 00:08:43.981 [2024-07-15 18:22:36.148944] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19cad6434a00 00:08:43.981 [2024-07-15 18:22:36.148966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.981 BaseBdev2 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:43.981 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:44.237 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.494 [ 00:08:44.494 { 00:08:44.494 "name": "BaseBdev2", 00:08:44.494 "aliases": [ 00:08:44.495 "35e6ad19-42d7-11ef-9ade-d5fc5159efa5" 00:08:44.495 ], 00:08:44.495 "product_name": "Malloc disk", 00:08:44.495 "block_size": 512, 00:08:44.495 "num_blocks": 65536, 00:08:44.495 "uuid": "35e6ad19-42d7-11ef-9ade-d5fc5159efa5", 00:08:44.495 "assigned_rate_limits": { 00:08:44.495 "rw_ios_per_sec": 0, 00:08:44.495 
"rw_mbytes_per_sec": 0, 00:08:44.495 "r_mbytes_per_sec": 0, 00:08:44.495 "w_mbytes_per_sec": 0 00:08:44.495 }, 00:08:44.495 "claimed": true, 00:08:44.495 "claim_type": "exclusive_write", 00:08:44.495 "zoned": false, 00:08:44.495 "supported_io_types": { 00:08:44.495 "read": true, 00:08:44.495 "write": true, 00:08:44.495 "unmap": true, 00:08:44.495 "flush": true, 00:08:44.495 "reset": true, 00:08:44.495 "nvme_admin": false, 00:08:44.495 "nvme_io": false, 00:08:44.495 "nvme_io_md": false, 00:08:44.495 "write_zeroes": true, 00:08:44.495 "zcopy": true, 00:08:44.495 "get_zone_info": false, 00:08:44.495 "zone_management": false, 00:08:44.495 "zone_append": false, 00:08:44.495 "compare": false, 00:08:44.495 "compare_and_write": false, 00:08:44.495 "abort": true, 00:08:44.495 "seek_hole": false, 00:08:44.495 "seek_data": false, 00:08:44.495 "copy": true, 00:08:44.495 "nvme_iov_md": false 00:08:44.495 }, 00:08:44.495 "memory_domains": [ 00:08:44.495 { 00:08:44.495 "dma_device_id": "system", 00:08:44.495 "dma_device_type": 1 00:08:44.495 }, 00:08:44.495 { 00:08:44.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.495 "dma_device_type": 2 00:08:44.495 } 00:08:44.495 ], 00:08:44.495 "driver_specific": {} 00:08:44.495 } 00:08:44.495 ] 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.495 18:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.753 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.753 "name": "Existed_Raid", 00:08:44.753 "uuid": "355e9148-42d7-11ef-9ade-d5fc5159efa5", 00:08:44.753 "strip_size_kb": 0, 00:08:44.753 "state": "online", 00:08:44.753 "raid_level": "raid1", 00:08:44.753 "superblock": true, 00:08:44.753 "num_base_bdevs": 2, 00:08:44.753 "num_base_bdevs_discovered": 2, 00:08:44.753 "num_base_bdevs_operational": 2, 00:08:44.753 
"base_bdevs_list": [ 00:08:44.753 { 00:08:44.753 "name": "BaseBdev1", 00:08:44.753 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:44.753 "is_configured": true, 00:08:44.753 "data_offset": 2048, 00:08:44.753 "data_size": 63488 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "name": "BaseBdev2", 00:08:44.753 "uuid": "35e6ad19-42d7-11ef-9ade-d5fc5159efa5", 00:08:44.753 "is_configured": true, 00:08:44.753 "data_offset": 2048, 00:08:44.753 "data_size": 63488 00:08:44.753 } 00:08:44.753 ] 00:08:44.753 }' 00:08:44.753 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.753 18:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:45.010 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:45.268 [2024-07-15 18:22:37.568690] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:45.268 "name": "Existed_Raid", 00:08:45.268 "aliases": [ 00:08:45.268 "355e9148-42d7-11ef-9ade-d5fc5159efa5" 00:08:45.268 ], 00:08:45.268 "product_name": "Raid Volume", 00:08:45.268 "block_size": 512, 00:08:45.268 "num_blocks": 63488, 00:08:45.268 "uuid": "355e9148-42d7-11ef-9ade-d5fc5159efa5", 00:08:45.268 "assigned_rate_limits": { 00:08:45.268 "rw_ios_per_sec": 0, 00:08:45.268 "rw_mbytes_per_sec": 0, 00:08:45.268 "r_mbytes_per_sec": 0, 00:08:45.268 "w_mbytes_per_sec": 0 00:08:45.268 }, 00:08:45.268 "claimed": false, 00:08:45.268 "zoned": false, 00:08:45.268 "supported_io_types": { 00:08:45.268 "read": true, 00:08:45.268 "write": true, 00:08:45.268 "unmap": false, 00:08:45.268 "flush": false, 00:08:45.268 "reset": true, 00:08:45.268 "nvme_admin": false, 00:08:45.268 "nvme_io": false, 00:08:45.268 "nvme_io_md": false, 00:08:45.268 "write_zeroes": true, 00:08:45.268 "zcopy": false, 00:08:45.268 "get_zone_info": false, 00:08:45.268 "zone_management": false, 00:08:45.268 "zone_append": false, 00:08:45.268 "compare": false, 00:08:45.268 "compare_and_write": false, 00:08:45.268 "abort": false, 00:08:45.268 "seek_hole": false, 00:08:45.268 "seek_data": false, 00:08:45.268 "copy": false, 00:08:45.268 "nvme_iov_md": false 00:08:45.268 }, 00:08:45.268 "memory_domains": [ 00:08:45.268 { 00:08:45.268 "dma_device_id": "system", 00:08:45.268 "dma_device_type": 1 00:08:45.268 }, 00:08:45.268 { 00:08:45.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.268 "dma_device_type": 2 00:08:45.268 }, 00:08:45.268 { 00:08:45.268 "dma_device_id": "system", 00:08:45.268 "dma_device_type": 1 00:08:45.268 }, 
00:08:45.268 { 00:08:45.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.268 "dma_device_type": 2 00:08:45.268 } 00:08:45.268 ], 00:08:45.268 "driver_specific": { 00:08:45.268 "raid": { 00:08:45.268 "uuid": "355e9148-42d7-11ef-9ade-d5fc5159efa5", 00:08:45.268 "strip_size_kb": 0, 00:08:45.268 "state": "online", 00:08:45.268 "raid_level": "raid1", 00:08:45.268 "superblock": true, 00:08:45.268 "num_base_bdevs": 2, 00:08:45.268 "num_base_bdevs_discovered": 2, 00:08:45.268 "num_base_bdevs_operational": 2, 00:08:45.268 "base_bdevs_list": [ 00:08:45.268 { 00:08:45.268 "name": "BaseBdev1", 00:08:45.268 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:45.268 "is_configured": true, 00:08:45.268 "data_offset": 2048, 00:08:45.268 "data_size": 63488 00:08:45.268 }, 00:08:45.268 { 00:08:45.268 "name": "BaseBdev2", 00:08:45.268 "uuid": "35e6ad19-42d7-11ef-9ade-d5fc5159efa5", 00:08:45.268 "is_configured": true, 00:08:45.268 "data_offset": 2048, 00:08:45.268 "data_size": 63488 00:08:45.268 } 00:08:45.268 ] 00:08:45.268 } 00:08:45.268 } 00:08:45.268 }' 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:45.268 BaseBdev2' 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:45.268 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:45.526 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:45.526 "name": "BaseBdev1", 00:08:45.526 "aliases": [ 00:08:45.526 "34676009-42d7-11ef-9ade-d5fc5159efa5" 00:08:45.526 ], 00:08:45.526 "product_name": "Malloc disk", 00:08:45.526 "block_size": 512, 00:08:45.526 "num_blocks": 65536, 00:08:45.526 "uuid": "34676009-42d7-11ef-9ade-d5fc5159efa5", 00:08:45.526 "assigned_rate_limits": { 00:08:45.526 "rw_ios_per_sec": 0, 00:08:45.526 "rw_mbytes_per_sec": 0, 00:08:45.526 "r_mbytes_per_sec": 0, 00:08:45.526 "w_mbytes_per_sec": 0 00:08:45.526 }, 00:08:45.526 "claimed": true, 00:08:45.526 "claim_type": "exclusive_write", 00:08:45.526 "zoned": false, 00:08:45.526 "supported_io_types": { 00:08:45.526 "read": true, 00:08:45.526 "write": true, 00:08:45.526 "unmap": true, 00:08:45.526 "flush": true, 00:08:45.526 "reset": true, 00:08:45.526 "nvme_admin": false, 00:08:45.526 "nvme_io": false, 00:08:45.526 "nvme_io_md": false, 00:08:45.526 "write_zeroes": true, 00:08:45.526 "zcopy": true, 00:08:45.526 "get_zone_info": false, 00:08:45.526 "zone_management": false, 00:08:45.526 "zone_append": false, 00:08:45.526 "compare": false, 00:08:45.526 "compare_and_write": false, 00:08:45.526 "abort": true, 00:08:45.526 "seek_hole": false, 00:08:45.526 "seek_data": false, 00:08:45.526 "copy": true, 00:08:45.526 "nvme_iov_md": false 00:08:45.526 }, 00:08:45.526 "memory_domains": [ 00:08:45.526 { 00:08:45.526 "dma_device_id": "system", 00:08:45.526 "dma_device_type": 1 00:08:45.526 }, 00:08:45.526 { 00:08:45.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.526 "dma_device_type": 2 00:08:45.526 } 00:08:45.526 ], 00:08:45.526 "driver_specific": {} 00:08:45.526 }' 00:08:45.526 18:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.526 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.526 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:45.526 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.526 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:45.784 18:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:46.043 "name": "BaseBdev2", 00:08:46.043 "aliases": [ 00:08:46.043 "35e6ad19-42d7-11ef-9ade-d5fc5159efa5" 00:08:46.043 ], 00:08:46.043 "product_name": "Malloc disk", 00:08:46.043 "block_size": 512, 00:08:46.043 "num_blocks": 65536, 00:08:46.043 "uuid": "35e6ad19-42d7-11ef-9ade-d5fc5159efa5", 00:08:46.043 "assigned_rate_limits": { 00:08:46.043 "rw_ios_per_sec": 0, 00:08:46.043 "rw_mbytes_per_sec": 0, 00:08:46.043 "r_mbytes_per_sec": 0, 00:08:46.043 "w_mbytes_per_sec": 0 00:08:46.043 }, 00:08:46.043 "claimed": true, 00:08:46.043 "claim_type": "exclusive_write", 00:08:46.043 "zoned": false, 00:08:46.043 "supported_io_types": { 00:08:46.043 "read": true, 00:08:46.043 "write": true, 00:08:46.043 "unmap": true, 00:08:46.043 "flush": true, 00:08:46.043 "reset": true, 00:08:46.043 "nvme_admin": false, 00:08:46.043 "nvme_io": false, 00:08:46.043 "nvme_io_md": false, 00:08:46.043 "write_zeroes": true, 00:08:46.043 "zcopy": true, 00:08:46.043 "get_zone_info": false, 00:08:46.043 "zone_management": false, 00:08:46.043 "zone_append": false, 00:08:46.043 "compare": false, 00:08:46.043 "compare_and_write": false, 00:08:46.043 "abort": true, 00:08:46.043 "seek_hole": false, 00:08:46.043 "seek_data": false, 00:08:46.043 "copy": true, 00:08:46.043 "nvme_iov_md": false 00:08:46.043 }, 00:08:46.043 "memory_domains": [ 00:08:46.043 { 00:08:46.043 "dma_device_id": "system", 00:08:46.043 "dma_device_type": 1 00:08:46.043 }, 00:08:46.043 { 00:08:46.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.043 "dma_device_type": 2 00:08:46.043 } 00:08:46.043 ], 00:08:46.043 "driver_specific": {} 00:08:46.043 }' 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:46.043 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:46.302 [2024-07-15 18:22:38.524663] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.302 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.561 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.561 
"name": "Existed_Raid", 00:08:46.561 "uuid": "355e9148-42d7-11ef-9ade-d5fc5159efa5", 00:08:46.561 "strip_size_kb": 0, 00:08:46.561 "state": "online", 00:08:46.561 "raid_level": "raid1", 00:08:46.561 "superblock": true, 00:08:46.561 "num_base_bdevs": 2, 00:08:46.561 "num_base_bdevs_discovered": 1, 00:08:46.561 "num_base_bdevs_operational": 1, 00:08:46.561 "base_bdevs_list": [ 00:08:46.561 { 00:08:46.561 "name": null, 00:08:46.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.561 "is_configured": false, 00:08:46.561 "data_offset": 2048, 00:08:46.561 "data_size": 63488 00:08:46.561 }, 00:08:46.561 { 00:08:46.561 "name": "BaseBdev2", 00:08:46.561 "uuid": "35e6ad19-42d7-11ef-9ade-d5fc5159efa5", 00:08:46.561 "is_configured": true, 00:08:46.561 "data_offset": 2048, 00:08:46.561 "data_size": 63488 00:08:46.561 } 00:08:46.561 ] 00:08:46.561 }' 00:08:46.561 18:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.561 18:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.820 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:46.820 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:46.820 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.820 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:47.388 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:47.388 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.388 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:47.646 [2024-07-15 18:22:39.774461] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.646 [2024-07-15 18:22:39.774504] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.646 [2024-07-15 18:22:39.783086] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.646 [2024-07-15 18:22:39.783105] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.646 [2024-07-15 18:22:39.783110] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19cad6434a00 name Existed_Raid, state offline 00:08:47.646 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:47.646 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:47.646 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.646 18:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:47.905 18:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51059 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51059 ']' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51059 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51059 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:47.905 killing process with pid 51059 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51059' 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51059 00:08:47.905 [2024-07-15 18:22:40.091472] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.905 [2024-07-15 18:22:40.091505] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.905 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51059 00:08:48.165 18:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:48.165 ************************************ 00:08:48.165 END TEST raid_state_function_test_sb 00:08:48.165 ************************************ 00:08:48.165 00:08:48.165 real 0m9.431s 00:08:48.165 user 0m16.492s 00:08:48.165 sys 0m1.562s 00:08:48.165 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.165 18:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.165 18:22:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:48.165 18:22:40 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:48.165 18:22:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:48.165 18:22:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.165 18:22:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.165 ************************************ 00:08:48.165 START TEST raid_superblock_test 00:08:48.165 ************************************ 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:48.165 18:22:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51333 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51333 /var/tmp/spdk-raid.sock 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51333 ']' 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.165 18:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.165 [2024-07-15 18:22:40.369443] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:48.165 [2024-07-15 18:22:40.369641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:48.732 EAL: TSC is not safe to use in SMP mode 00:08:48.732 EAL: TSC is not invariant 00:08:48.732 [2024-07-15 18:22:40.966062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.732 [2024-07-15 18:22:41.074336] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
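[Note: unlike the state-function tests above, raid_superblock_test assembles the array on passthru bdevs with fixed UUIDs rather than on the malloc disks directly. A minimal sketch of that setup, assuming the same $rpc/$sock conventions as the previous note; the commands mirror the bdev_malloc_create, bdev_passthru_create, and bdev_raid_create calls traced below.]
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc2
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s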
00:08:48.732 [2024-07-15 18:22:41.076432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.732 [2024-07-15 18:22:41.077225] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.732 [2024-07-15 18:22:41.077240] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.319 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:49.578 malloc1 00:08:49.578 18:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:49.836 [2024-07-15 18:22:42.005529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:49.836 [2024-07-15 18:22:42.005589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.836 [2024-07-15 18:22:42.005602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834780 00:08:49.836 [2024-07-15 18:22:42.005611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.836 [2024-07-15 18:22:42.006502] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.836 [2024-07-15 18:22:42.006530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:49.836 pt1 00:08:49.836 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:49.836 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:49.836 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:49.836 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:49.836 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:49.837 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.837 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.837 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.837 18:22:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:50.096 malloc2 00:08:50.096 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.354 [2024-07-15 18:22:42.517503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.354 [2024-07-15 18:22:42.517567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.355 [2024-07-15 18:22:42.517581] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834c80 00:08:50.355 [2024-07-15 18:22:42.517589] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.355 [2024-07-15 18:22:42.518249] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.355 [2024-07-15 18:22:42.518281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.355 pt2 00:08:50.355 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:50.355 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:50.355 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:50.613 [2024-07-15 18:22:42.793498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.613 [2024-07-15 18:22:42.794083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.613 [2024-07-15 18:22:42.794144] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33fe61834f00 00:08:50.613 [2024-07-15 18:22:42.794151] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.613 [2024-07-15 18:22:42.794187] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33fe61897e20 00:08:50.613 [2024-07-15 18:22:42.794268] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33fe61834f00 00:08:50.613 [2024-07-15 18:22:42.794273] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x33fe61834f00 00:08:50.613 [2024-07-15 18:22:42.794300] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.613 18:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.872 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:50.872 "name": "raid_bdev1", 00:08:50.872 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:50.872 "strip_size_kb": 0, 00:08:50.872 "state": "online", 00:08:50.872 "raid_level": "raid1", 00:08:50.872 "superblock": true, 00:08:50.872 "num_base_bdevs": 2, 00:08:50.872 "num_base_bdevs_discovered": 2, 00:08:50.872 "num_base_bdevs_operational": 2, 00:08:50.872 "base_bdevs_list": [ 00:08:50.872 { 00:08:50.872 "name": "pt1", 00:08:50.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.872 "is_configured": true, 00:08:50.872 "data_offset": 2048, 00:08:50.872 "data_size": 63488 00:08:50.872 }, 00:08:50.872 { 00:08:50.872 "name": "pt2", 00:08:50.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.872 "is_configured": true, 00:08:50.872 "data_offset": 2048, 00:08:50.872 "data_size": 63488 00:08:50.872 } 00:08:50.872 ] 00:08:50.872 }' 00:08:50.872 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:50.872 18:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:51.128 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:51.385 [2024-07-15 18:22:43.649486] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:51.385 "name": "raid_bdev1", 00:08:51.385 "aliases": [ 00:08:51.385 "39dc989a-42d7-11ef-9ade-d5fc5159efa5" 00:08:51.385 ], 00:08:51.385 "product_name": "Raid Volume", 00:08:51.385 "block_size": 512, 00:08:51.385 "num_blocks": 63488, 00:08:51.385 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:51.385 "assigned_rate_limits": { 00:08:51.385 "rw_ios_per_sec": 0, 00:08:51.385 "rw_mbytes_per_sec": 0, 00:08:51.385 "r_mbytes_per_sec": 0, 00:08:51.385 "w_mbytes_per_sec": 0 00:08:51.385 }, 00:08:51.385 "claimed": false, 00:08:51.385 "zoned": false, 00:08:51.385 "supported_io_types": { 00:08:51.385 "read": true, 00:08:51.385 "write": true, 00:08:51.385 "unmap": false, 00:08:51.385 "flush": false, 00:08:51.385 "reset": true, 00:08:51.385 "nvme_admin": false, 00:08:51.385 "nvme_io": 
false, 00:08:51.385 "nvme_io_md": false, 00:08:51.385 "write_zeroes": true, 00:08:51.385 "zcopy": false, 00:08:51.385 "get_zone_info": false, 00:08:51.385 "zone_management": false, 00:08:51.385 "zone_append": false, 00:08:51.385 "compare": false, 00:08:51.385 "compare_and_write": false, 00:08:51.385 "abort": false, 00:08:51.385 "seek_hole": false, 00:08:51.385 "seek_data": false, 00:08:51.385 "copy": false, 00:08:51.385 "nvme_iov_md": false 00:08:51.385 }, 00:08:51.385 "memory_domains": [ 00:08:51.385 { 00:08:51.385 "dma_device_id": "system", 00:08:51.385 "dma_device_type": 1 00:08:51.385 }, 00:08:51.385 { 00:08:51.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.385 "dma_device_type": 2 00:08:51.385 }, 00:08:51.385 { 00:08:51.385 "dma_device_id": "system", 00:08:51.385 "dma_device_type": 1 00:08:51.385 }, 00:08:51.385 { 00:08:51.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.385 "dma_device_type": 2 00:08:51.385 } 00:08:51.385 ], 00:08:51.385 "driver_specific": { 00:08:51.385 "raid": { 00:08:51.385 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:51.385 "strip_size_kb": 0, 00:08:51.385 "state": "online", 00:08:51.385 "raid_level": "raid1", 00:08:51.385 "superblock": true, 00:08:51.385 "num_base_bdevs": 2, 00:08:51.385 "num_base_bdevs_discovered": 2, 00:08:51.385 "num_base_bdevs_operational": 2, 00:08:51.385 "base_bdevs_list": [ 00:08:51.385 { 00:08:51.385 "name": "pt1", 00:08:51.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.385 "is_configured": true, 00:08:51.385 "data_offset": 2048, 00:08:51.385 "data_size": 63488 00:08:51.385 }, 00:08:51.385 { 00:08:51.385 "name": "pt2", 00:08:51.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.385 "is_configured": true, 00:08:51.385 "data_offset": 2048, 00:08:51.385 "data_size": 63488 00:08:51.385 } 00:08:51.385 ] 00:08:51.385 } 00:08:51.385 } 00:08:51.385 }' 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:51.385 pt2' 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:51.385 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:51.644 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:51.644 "name": "pt1", 00:08:51.644 "aliases": [ 00:08:51.644 "00000000-0000-0000-0000-000000000001" 00:08:51.644 ], 00:08:51.644 "product_name": "passthru", 00:08:51.644 "block_size": 512, 00:08:51.644 "num_blocks": 65536, 00:08:51.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.644 "assigned_rate_limits": { 00:08:51.644 "rw_ios_per_sec": 0, 00:08:51.644 "rw_mbytes_per_sec": 0, 00:08:51.644 "r_mbytes_per_sec": 0, 00:08:51.644 "w_mbytes_per_sec": 0 00:08:51.644 }, 00:08:51.644 "claimed": true, 00:08:51.644 "claim_type": "exclusive_write", 00:08:51.644 "zoned": false, 00:08:51.644 "supported_io_types": { 00:08:51.644 "read": true, 00:08:51.644 "write": true, 00:08:51.644 "unmap": true, 00:08:51.644 "flush": true, 00:08:51.644 "reset": true, 00:08:51.644 "nvme_admin": false, 00:08:51.644 "nvme_io": false, 00:08:51.644 "nvme_io_md": false, 00:08:51.644 "write_zeroes": true, 
00:08:51.644 "zcopy": true, 00:08:51.644 "get_zone_info": false, 00:08:51.644 "zone_management": false, 00:08:51.644 "zone_append": false, 00:08:51.644 "compare": false, 00:08:51.644 "compare_and_write": false, 00:08:51.644 "abort": true, 00:08:51.644 "seek_hole": false, 00:08:51.644 "seek_data": false, 00:08:51.644 "copy": true, 00:08:51.644 "nvme_iov_md": false 00:08:51.644 }, 00:08:51.644 "memory_domains": [ 00:08:51.644 { 00:08:51.644 "dma_device_id": "system", 00:08:51.644 "dma_device_type": 1 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.644 "dma_device_type": 2 00:08:51.644 } 00:08:51.644 ], 00:08:51.644 "driver_specific": { 00:08:51.644 "passthru": { 00:08:51.644 "name": "pt1", 00:08:51.644 "base_bdev_name": "malloc1" 00:08:51.644 } 00:08:51.644 } 00:08:51.644 }' 00:08:51.644 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.644 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.644 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:51.644 18:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:51.644 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.902 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.902 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:51.902 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.903 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:51.903 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:52.160 "name": "pt2", 00:08:52.160 "aliases": [ 00:08:52.160 "00000000-0000-0000-0000-000000000002" 00:08:52.160 ], 00:08:52.160 "product_name": "passthru", 00:08:52.160 "block_size": 512, 00:08:52.160 "num_blocks": 65536, 00:08:52.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.160 "assigned_rate_limits": { 00:08:52.160 "rw_ios_per_sec": 0, 00:08:52.160 "rw_mbytes_per_sec": 0, 00:08:52.160 "r_mbytes_per_sec": 0, 00:08:52.160 "w_mbytes_per_sec": 0 00:08:52.160 }, 00:08:52.160 "claimed": true, 00:08:52.160 "claim_type": "exclusive_write", 00:08:52.160 "zoned": false, 00:08:52.160 "supported_io_types": { 00:08:52.160 "read": true, 00:08:52.160 "write": true, 00:08:52.160 "unmap": true, 00:08:52.160 "flush": true, 00:08:52.160 "reset": true, 00:08:52.160 "nvme_admin": false, 00:08:52.160 "nvme_io": false, 00:08:52.160 "nvme_io_md": false, 00:08:52.160 "write_zeroes": true, 00:08:52.160 "zcopy": true, 00:08:52.160 "get_zone_info": false, 00:08:52.160 "zone_management": false, 00:08:52.160 "zone_append": false, 00:08:52.160 
"compare": false, 00:08:52.160 "compare_and_write": false, 00:08:52.160 "abort": true, 00:08:52.160 "seek_hole": false, 00:08:52.160 "seek_data": false, 00:08:52.160 "copy": true, 00:08:52.160 "nvme_iov_md": false 00:08:52.160 }, 00:08:52.160 "memory_domains": [ 00:08:52.160 { 00:08:52.160 "dma_device_id": "system", 00:08:52.160 "dma_device_type": 1 00:08:52.160 }, 00:08:52.160 { 00:08:52.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.160 "dma_device_type": 2 00:08:52.160 } 00:08:52.160 ], 00:08:52.160 "driver_specific": { 00:08:52.160 "passthru": { 00:08:52.160 "name": "pt2", 00:08:52.160 "base_bdev_name": "malloc2" 00:08:52.160 } 00:08:52.160 } 00:08:52.160 }' 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:52.160 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:52.419 [2024-07-15 18:22:44.661475] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.419 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=39dc989a-42d7-11ef-9ade-d5fc5159efa5 00:08:52.419 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 39dc989a-42d7-11ef-9ade-d5fc5159efa5 ']' 00:08:52.419 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:52.676 [2024-07-15 18:22:44.905401] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.676 [2024-07-15 18:22:44.905430] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.676 [2024-07-15 18:22:44.905455] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.676 [2024-07-15 18:22:44.905469] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.676 [2024-07-15 18:22:44.905474] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61834f00 name raid_bdev1, state offline 00:08:52.676 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:52.676 18:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:52.934 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:52.934 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:52.934 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.934 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:53.193 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:53.193 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:53.452 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:53.452 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:53.787 18:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:54.046 [2024-07-15 18:22:46.265370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:54.046 [2024-07-15 18:22:46.265971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:54.046 [2024-07-15 18:22:46.265999] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:08:54.046 [2024-07-15 18:22:46.266038] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:54.046 [2024-07-15 18:22:46.266049] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.046 [2024-07-15 18:22:46.266054] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61834c80 name raid_bdev1, state configuring 00:08:54.046 request: 00:08:54.046 { 00:08:54.046 "name": "raid_bdev1", 00:08:54.046 "raid_level": "raid1", 00:08:54.046 "base_bdevs": [ 00:08:54.046 "malloc1", 00:08:54.046 "malloc2" 00:08:54.046 ], 00:08:54.046 "superblock": false, 00:08:54.046 "method": "bdev_raid_create", 00:08:54.046 "req_id": 1 00:08:54.046 } 00:08:54.046 Got JSON-RPC error response 00:08:54.046 response: 00:08:54.046 { 00:08:54.046 "code": -17, 00:08:54.046 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:54.046 } 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.046 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:54.304 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:54.304 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:54.304 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.564 [2024-07-15 18:22:46.749344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.564 [2024-07-15 18:22:46.749406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.564 [2024-07-15 18:22:46.749418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834780 00:08:54.564 [2024-07-15 18:22:46.749427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.564 [2024-07-15 18:22:46.750092] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.564 [2024-07-15 18:22:46.750119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.564 [2024-07-15 18:22:46.750145] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.564 [2024-07-15 18:22:46.750157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.564 pt1 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.564 18:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.823 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:54.823 "name": "raid_bdev1", 00:08:54.823 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:54.823 "strip_size_kb": 0, 00:08:54.823 "state": "configuring", 00:08:54.823 "raid_level": "raid1", 00:08:54.823 "superblock": true, 00:08:54.823 "num_base_bdevs": 2, 00:08:54.823 "num_base_bdevs_discovered": 1, 00:08:54.823 "num_base_bdevs_operational": 2, 00:08:54.823 "base_bdevs_list": [ 00:08:54.823 { 00:08:54.823 "name": "pt1", 00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.823 "is_configured": true, 00:08:54.823 "data_offset": 2048, 00:08:54.823 "data_size": 63488 00:08:54.823 }, 00:08:54.823 { 00:08:54.823 "name": null, 00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.823 "is_configured": false, 00:08:54.823 "data_offset": 2048, 00:08:54.823 "data_size": 63488 00:08:54.823 } 00:08:54.823 ] 00:08:54.823 }' 00:08:54.823 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:54.823 18:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.083 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:55.083 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:55.083 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:55.083 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.342 [2024-07-15 18:22:47.637333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.342 [2024-07-15 18:22:47.637392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.342 [2024-07-15 18:22:47.637405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834f00 00:08:55.342 [2024-07-15 18:22:47.637413] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.342 [2024-07-15 18:22:47.637536] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.342 [2024-07-15 18:22:47.637548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.342 [2024-07-15 18:22:47.637572] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.342 [2024-07-15 18:22:47.637581] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.342 [2024-07-15 18:22:47.637609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33fe61835180 00:08:55.342 [2024-07-15 18:22:47.637614] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.342 [2024-07-15 18:22:47.637634] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33fe61897e20 00:08:55.342 [2024-07-15 18:22:47.637693] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33fe61835180 00:08:55.342 [2024-07-15 18:22:47.637698] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x33fe61835180 00:08:55.342 [2024-07-15 18:22:47.637721] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.342 pt2 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.342 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.600 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:55.600 "name": "raid_bdev1", 00:08:55.600 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:55.600 "strip_size_kb": 0, 00:08:55.600 "state": "online", 00:08:55.600 "raid_level": "raid1", 00:08:55.600 "superblock": true, 00:08:55.600 "num_base_bdevs": 2, 00:08:55.600 "num_base_bdevs_discovered": 2, 00:08:55.600 "num_base_bdevs_operational": 2, 00:08:55.600 "base_bdevs_list": [ 00:08:55.600 { 00:08:55.600 "name": "pt1", 00:08:55.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.600 "is_configured": true, 00:08:55.600 "data_offset": 2048, 00:08:55.600 "data_size": 63488 00:08:55.600 }, 00:08:55.600 { 00:08:55.600 "name": "pt2", 00:08:55.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.600 "is_configured": true, 00:08:55.600 "data_offset": 2048, 00:08:55.600 "data_size": 63488 00:08:55.600 } 00:08:55.600 ] 00:08:55.600 }' 00:08:55.600 18:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:55.600 
18:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:56.166 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:56.423 [2024-07-15 18:22:48.573364] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:56.423 "name": "raid_bdev1", 00:08:56.423 "aliases": [ 00:08:56.423 "39dc989a-42d7-11ef-9ade-d5fc5159efa5" 00:08:56.423 ], 00:08:56.423 "product_name": "Raid Volume", 00:08:56.423 "block_size": 512, 00:08:56.423 "num_blocks": 63488, 00:08:56.423 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:56.423 "assigned_rate_limits": { 00:08:56.423 "rw_ios_per_sec": 0, 00:08:56.423 "rw_mbytes_per_sec": 0, 00:08:56.423 "r_mbytes_per_sec": 0, 00:08:56.423 "w_mbytes_per_sec": 0 00:08:56.423 }, 00:08:56.423 "claimed": false, 00:08:56.423 "zoned": false, 00:08:56.423 "supported_io_types": { 00:08:56.423 "read": true, 00:08:56.423 "write": true, 00:08:56.423 "unmap": false, 00:08:56.423 "flush": false, 00:08:56.423 "reset": true, 00:08:56.423 "nvme_admin": false, 00:08:56.423 "nvme_io": false, 00:08:56.423 "nvme_io_md": false, 00:08:56.423 "write_zeroes": true, 00:08:56.423 "zcopy": false, 00:08:56.423 "get_zone_info": false, 00:08:56.423 "zone_management": false, 00:08:56.423 "zone_append": false, 00:08:56.423 "compare": false, 00:08:56.423 "compare_and_write": false, 00:08:56.423 "abort": false, 00:08:56.423 "seek_hole": false, 00:08:56.423 "seek_data": false, 00:08:56.423 "copy": false, 00:08:56.423 "nvme_iov_md": false 00:08:56.423 }, 00:08:56.423 "memory_domains": [ 00:08:56.423 { 00:08:56.423 "dma_device_id": "system", 00:08:56.423 "dma_device_type": 1 00:08:56.423 }, 00:08:56.423 { 00:08:56.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.423 "dma_device_type": 2 00:08:56.423 }, 00:08:56.423 { 00:08:56.423 "dma_device_id": "system", 00:08:56.423 "dma_device_type": 1 00:08:56.423 }, 00:08:56.423 { 00:08:56.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.423 "dma_device_type": 2 00:08:56.423 } 00:08:56.423 ], 00:08:56.423 "driver_specific": { 00:08:56.423 "raid": { 00:08:56.423 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:56.423 "strip_size_kb": 0, 00:08:56.423 "state": "online", 00:08:56.423 "raid_level": "raid1", 00:08:56.423 "superblock": true, 00:08:56.423 "num_base_bdevs": 2, 00:08:56.423 "num_base_bdevs_discovered": 2, 00:08:56.423 "num_base_bdevs_operational": 2, 00:08:56.423 "base_bdevs_list": [ 00:08:56.423 { 00:08:56.423 "name": "pt1", 00:08:56.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.423 "is_configured": true, 00:08:56.423 
"data_offset": 2048, 00:08:56.423 "data_size": 63488 00:08:56.423 }, 00:08:56.423 { 00:08:56.423 "name": "pt2", 00:08:56.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.423 "is_configured": true, 00:08:56.423 "data_offset": 2048, 00:08:56.423 "data_size": 63488 00:08:56.423 } 00:08:56.423 ] 00:08:56.423 } 00:08:56.423 } 00:08:56.423 }' 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:56.423 pt2' 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:56.423 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:56.683 "name": "pt1", 00:08:56.683 "aliases": [ 00:08:56.683 "00000000-0000-0000-0000-000000000001" 00:08:56.683 ], 00:08:56.683 "product_name": "passthru", 00:08:56.683 "block_size": 512, 00:08:56.683 "num_blocks": 65536, 00:08:56.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.683 "assigned_rate_limits": { 00:08:56.683 "rw_ios_per_sec": 0, 00:08:56.683 "rw_mbytes_per_sec": 0, 00:08:56.683 "r_mbytes_per_sec": 0, 00:08:56.683 "w_mbytes_per_sec": 0 00:08:56.683 }, 00:08:56.683 "claimed": true, 00:08:56.683 "claim_type": "exclusive_write", 00:08:56.683 "zoned": false, 00:08:56.683 "supported_io_types": { 00:08:56.683 "read": true, 00:08:56.683 "write": true, 00:08:56.683 "unmap": true, 00:08:56.683 "flush": true, 00:08:56.683 "reset": true, 00:08:56.683 "nvme_admin": false, 00:08:56.683 "nvme_io": false, 00:08:56.683 "nvme_io_md": false, 00:08:56.683 "write_zeroes": true, 00:08:56.683 "zcopy": true, 00:08:56.683 "get_zone_info": false, 00:08:56.683 "zone_management": false, 00:08:56.683 "zone_append": false, 00:08:56.683 "compare": false, 00:08:56.683 "compare_and_write": false, 00:08:56.683 "abort": true, 00:08:56.683 "seek_hole": false, 00:08:56.683 "seek_data": false, 00:08:56.683 "copy": true, 00:08:56.683 "nvme_iov_md": false 00:08:56.683 }, 00:08:56.683 "memory_domains": [ 00:08:56.683 { 00:08:56.683 "dma_device_id": "system", 00:08:56.683 "dma_device_type": 1 00:08:56.683 }, 00:08:56.683 { 00:08:56.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.683 "dma_device_type": 2 00:08:56.683 } 00:08:56.683 ], 00:08:56.683 "driver_specific": { 00:08:56.683 "passthru": { 00:08:56.683 "name": "pt1", 00:08:56.683 "base_bdev_name": "malloc1" 00:08:56.683 } 00:08:56.683 } 00:08:56.683 }' 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:56.683 18:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:56.942 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:56.942 "name": "pt2", 00:08:56.942 "aliases": [ 00:08:56.942 "00000000-0000-0000-0000-000000000002" 00:08:56.942 ], 00:08:56.942 "product_name": "passthru", 00:08:56.942 "block_size": 512, 00:08:56.942 "num_blocks": 65536, 00:08:56.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.942 "assigned_rate_limits": { 00:08:56.942 "rw_ios_per_sec": 0, 00:08:56.942 "rw_mbytes_per_sec": 0, 00:08:56.942 "r_mbytes_per_sec": 0, 00:08:56.942 "w_mbytes_per_sec": 0 00:08:56.942 }, 00:08:56.942 "claimed": true, 00:08:56.942 "claim_type": "exclusive_write", 00:08:56.942 "zoned": false, 00:08:56.942 "supported_io_types": { 00:08:56.942 "read": true, 00:08:56.942 "write": true, 00:08:56.942 "unmap": true, 00:08:56.942 "flush": true, 00:08:56.942 "reset": true, 00:08:56.942 "nvme_admin": false, 00:08:56.942 "nvme_io": false, 00:08:56.942 "nvme_io_md": false, 00:08:56.942 "write_zeroes": true, 00:08:56.942 "zcopy": true, 00:08:56.942 "get_zone_info": false, 00:08:56.942 "zone_management": false, 00:08:56.942 "zone_append": false, 00:08:56.942 "compare": false, 00:08:56.942 "compare_and_write": false, 00:08:56.942 "abort": true, 00:08:56.942 "seek_hole": false, 00:08:56.942 "seek_data": false, 00:08:56.942 "copy": true, 00:08:56.942 "nvme_iov_md": false 00:08:56.942 }, 00:08:56.942 "memory_domains": [ 00:08:56.942 { 00:08:56.942 "dma_device_id": "system", 00:08:56.942 "dma_device_type": 1 00:08:56.942 }, 00:08:56.943 { 00:08:56.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.943 "dma_device_type": 2 00:08:56.943 } 00:08:56.943 ], 00:08:56.943 "driver_specific": { 00:08:56.943 "passthru": { 00:08:56.943 "name": "pt2", 00:08:56.943 "base_bdev_name": "malloc2" 00:08:56.943 } 00:08:56.943 } 00:08:56.943 }' 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:56.943 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:57.201 [2024-07-15 18:22:49.413344] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.201 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 39dc989a-42d7-11ef-9ade-d5fc5159efa5 '!=' 39dc989a-42d7-11ef-9ade-d5fc5159efa5 ']' 00:08:57.201 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:57.201 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:57.201 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:57.201 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:57.458 [2024-07-15 18:22:49.657314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.458 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.716 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:57.716 "name": "raid_bdev1", 00:08:57.716 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:57.716 "strip_size_kb": 0, 00:08:57.716 "state": "online", 00:08:57.716 "raid_level": "raid1", 00:08:57.716 "superblock": true, 00:08:57.716 "num_base_bdevs": 2, 00:08:57.716 "num_base_bdevs_discovered": 1, 00:08:57.716 "num_base_bdevs_operational": 1, 00:08:57.716 "base_bdevs_list": [ 00:08:57.716 { 00:08:57.716 "name": null, 00:08:57.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.716 "is_configured": false, 00:08:57.716 "data_offset": 
2048, 00:08:57.716 "data_size": 63488 00:08:57.716 }, 00:08:57.716 { 00:08:57.716 "name": "pt2", 00:08:57.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.716 "is_configured": true, 00:08:57.716 "data_offset": 2048, 00:08:57.716 "data_size": 63488 00:08:57.716 } 00:08:57.716 ] 00:08:57.716 }' 00:08:57.716 18:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:57.716 18:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.974 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:58.234 [2024-07-15 18:22:50.477282] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.234 [2024-07-15 18:22:50.477311] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.234 [2024-07-15 18:22:50.477352] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.234 [2024-07-15 18:22:50.477365] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.234 [2024-07-15 18:22:50.477370] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61835180 name raid_bdev1, state offline 00:08:58.234 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.234 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:58.541 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:58.541 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:58.541 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:58.541 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:58.541 18:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:58.799 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.058 [2024-07-15 18:22:51.289272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.058 [2024-07-15 18:22:51.289336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.058 [2024-07-15 18:22:51.289349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834f00 00:08:59.058 [2024-07-15 18:22:51.289358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.058 [2024-07-15 18:22:51.290033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.058 
[2024-07-15 18:22:51.290059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.058 [2024-07-15 18:22:51.290086] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.058 [2024-07-15 18:22:51.290098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.058 [2024-07-15 18:22:51.290125] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33fe61835180 00:08:59.058 [2024-07-15 18:22:51.290130] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:59.058 [2024-07-15 18:22:51.290150] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33fe61897e20 00:08:59.058 [2024-07-15 18:22:51.290201] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33fe61835180 00:08:59.058 [2024-07-15 18:22:51.290206] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x33fe61835180 00:08:59.058 [2024-07-15 18:22:51.290238] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.058 pt2 00:08:59.058 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:59.058 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.059 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.317 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.317 "name": "raid_bdev1", 00:08:59.317 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:08:59.317 "strip_size_kb": 0, 00:08:59.317 "state": "online", 00:08:59.317 "raid_level": "raid1", 00:08:59.317 "superblock": true, 00:08:59.317 "num_base_bdevs": 2, 00:08:59.317 "num_base_bdevs_discovered": 1, 00:08:59.317 "num_base_bdevs_operational": 1, 00:08:59.317 "base_bdevs_list": [ 00:08:59.317 { 00:08:59.317 "name": null, 00:08:59.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.317 "is_configured": false, 00:08:59.317 "data_offset": 2048, 00:08:59.317 "data_size": 63488 00:08:59.317 }, 00:08:59.317 { 00:08:59.317 "name": "pt2", 00:08:59.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.317 "is_configured": true, 00:08:59.317 "data_offset": 2048, 00:08:59.317 "data_size": 63488 00:08:59.317 } 00:08:59.317 ] 00:08:59.317 }' 00:08:59.317 18:22:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.317 18:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.576 18:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:59.834 [2024-07-15 18:22:52.129256] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.834 [2024-07-15 18:22:52.129284] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.834 [2024-07-15 18:22:52.129308] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.834 [2024-07-15 18:22:52.129320] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.834 [2024-07-15 18:22:52.129324] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61835180 name raid_bdev1, state offline 00:08:59.834 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.834 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:09:00.093 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:09:00.093 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:09:00.093 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:09:00.093 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.351 [2024-07-15 18:22:52.677260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.351 [2024-07-15 18:22:52.677323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.352 [2024-07-15 18:22:52.677336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x33fe61834c80 00:09:00.352 [2024-07-15 18:22:52.677344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.352 [2024-07-15 18:22:52.678011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.352 [2024-07-15 18:22:52.678041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.352 [2024-07-15 18:22:52.678067] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:00.352 [2024-07-15 18:22:52.678079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.352 [2024-07-15 18:22:52.678111] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:00.352 [2024-07-15 18:22:52.678115] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.352 [2024-07-15 18:22:52.678120] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61834780 name raid_bdev1, state configuring 00:09:00.352 [2024-07-15 18:22:52.678128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.352 [2024-07-15 18:22:52.678143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33fe61834780 00:09:00.352 [2024-07-15 18:22:52.678147] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.352 [2024-07-15 18:22:52.678167] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33fe61897e20 00:09:00.352 [2024-07-15 18:22:52.678218] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33fe61834780 00:09:00.352 [2024-07-15 18:22:52.678222] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x33fe61834780 00:09:00.352 [2024-07-15 18:22:52.678243] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.352 pt1 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.352 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.611 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:00.611 "name": "raid_bdev1", 00:09:00.611 "uuid": "39dc989a-42d7-11ef-9ade-d5fc5159efa5", 00:09:00.611 "strip_size_kb": 0, 00:09:00.611 "state": "online", 00:09:00.611 "raid_level": "raid1", 00:09:00.611 "superblock": true, 00:09:00.611 "num_base_bdevs": 2, 00:09:00.611 "num_base_bdevs_discovered": 1, 00:09:00.611 "num_base_bdevs_operational": 1, 00:09:00.611 "base_bdevs_list": [ 00:09:00.611 { 00:09:00.611 "name": null, 00:09:00.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.611 "is_configured": false, 00:09:00.611 "data_offset": 2048, 00:09:00.611 "data_size": 63488 00:09:00.611 }, 00:09:00.611 { 00:09:00.611 "name": "pt2", 00:09:00.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.611 "is_configured": true, 00:09:00.611 "data_offset": 2048, 00:09:00.611 "data_size": 63488 00:09:00.611 } 00:09:00.611 ] 00:09:00.611 }' 00:09:00.611 18:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:00.611 18:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.178 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:01.178 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:09:01.437 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:09:01.437 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:01.437 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:09:01.697 [2024-07-15 18:22:53.821310] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 39dc989a-42d7-11ef-9ade-d5fc5159efa5 '!=' 39dc989a-42d7-11ef-9ade-d5fc5159efa5 ']' 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51333 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51333 ']' 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51333 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51333 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:01.697 killing process with pid 51333 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51333' 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51333 00:09:01.697 [2024-07-15 18:22:53.852766] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.697 [2024-07-15 18:22:53.852794] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.697 [2024-07-15 18:22:53.852814] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.697 [2024-07-15 18:22:53.852818] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33fe61834780 name raid_bdev1, state offline 00:09:01.697 18:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51333 00:09:01.697 [2024-07-15 18:22:53.867533] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.957 18:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:01.957 00:09:01.957 real 0m13.735s 00:09:01.957 user 0m24.392s 00:09:01.957 sys 0m2.255s 00:09:01.957 18:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.957 18:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.957 ************************************ 00:09:01.957 END TEST raid_superblock_test 00:09:01.957 ************************************ 00:09:01.957 18:22:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:01.957 18:22:54 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:01.957 18:22:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:01.957 18:22:54 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.957 18:22:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.957 ************************************ 00:09:01.957 START TEST raid_read_error_test 00:09:01.957 ************************************ 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.f0uWUN713Q 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51726 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51726 /var/tmp/spdk-raid.sock 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51726 ']' 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.957 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.957 18:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.957 [2024-07-15 18:22:54.162440] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:09:01.957 [2024-07-15 18:22:54.162755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:02.524 EAL: TSC is not safe to use in SMP mode 00:09:02.524 EAL: TSC is not invariant 00:09:02.524 [2024-07-15 18:22:54.802254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.782 [2024-07-15 18:22:54.914924] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:02.782 [2024-07-15 18:22:54.917114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.783 [2024-07-15 18:22:54.917927] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.783 [2024-07-15 18:22:54.917934] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.041 18:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.041 18:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:03.041 18:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:03.041 18:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.298 BaseBdev1_malloc 00:09:03.298 18:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:03.556 true 00:09:03.556 18:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.828 [2024-07-15 18:22:56.001653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.828 [2024-07-15 18:22:56.001727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.828 [2024-07-15 18:22:56.001767] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3868b3c34780 00:09:03.828 [2024-07-15 18:22:56.001776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.828 [2024-07-15 18:22:56.002457] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.828 [2024-07-15 18:22:56.002495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.828 BaseBdev1 00:09:03.829 18:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:03.829 18:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.101 BaseBdev2_malloc 00:09:04.101 18:22:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:04.359 true 00:09:04.359 18:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.617 [2024-07-15 18:22:56.873646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.617 [2024-07-15 18:22:56.873711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.617 [2024-07-15 18:22:56.873738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3868b3c34c80 00:09:04.617 [2024-07-15 18:22:56.873747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.617 [2024-07-15 18:22:56.874434] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.617 [2024-07-15 18:22:56.874461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.617 BaseBdev2 00:09:04.617 18:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:04.876 [2024-07-15 18:22:57.129653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.876 [2024-07-15 18:22:57.130289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.876 [2024-07-15 18:22:57.130356] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3868b3c34f00 00:09:04.876 [2024-07-15 18:22:57.130363] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:04.876 [2024-07-15 18:22:57.130395] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3868b3ca0e20 00:09:04.876 [2024-07-15 18:22:57.130484] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3868b3c34f00 00:09:04.876 [2024-07-15 18:22:57.130489] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3868b3c34f00 00:09:04.876 [2024-07-15 18:22:57.130525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.876 18:22:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.876 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.135 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.135 "name": "raid_bdev1", 00:09:05.135 "uuid": "42681ebc-42d7-11ef-9ade-d5fc5159efa5", 00:09:05.135 "strip_size_kb": 0, 00:09:05.135 "state": "online", 00:09:05.135 "raid_level": "raid1", 00:09:05.135 "superblock": true, 00:09:05.135 "num_base_bdevs": 2, 00:09:05.135 "num_base_bdevs_discovered": 2, 00:09:05.135 "num_base_bdevs_operational": 2, 00:09:05.135 "base_bdevs_list": [ 00:09:05.135 { 00:09:05.135 "name": "BaseBdev1", 00:09:05.135 "uuid": "127783f2-1283-4f51-baae-098d170649b7", 00:09:05.135 "is_configured": true, 00:09:05.135 "data_offset": 2048, 00:09:05.135 "data_size": 63488 00:09:05.135 }, 00:09:05.136 { 00:09:05.136 "name": "BaseBdev2", 00:09:05.136 "uuid": "0719230d-b8b5-8650-98ce-14a4693f75e2", 00:09:05.136 "is_configured": true, 00:09:05.136 "data_offset": 2048, 00:09:05.136 "data_size": 63488 00:09:05.136 } 00:09:05.136 ] 00:09:05.136 }' 00:09:05.136 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.136 18:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.703 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:05.703 18:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:05.703 [2024-07-15 18:22:57.953865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3868b3ca0ec0 00:09:06.640 18:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:06.899 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:06.900 18:22:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.900 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.159 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.159 "name": "raid_bdev1", 00:09:07.159 "uuid": "42681ebc-42d7-11ef-9ade-d5fc5159efa5", 00:09:07.159 "strip_size_kb": 0, 00:09:07.159 "state": "online", 00:09:07.159 "raid_level": "raid1", 00:09:07.159 "superblock": true, 00:09:07.159 "num_base_bdevs": 2, 00:09:07.159 "num_base_bdevs_discovered": 2, 00:09:07.159 "num_base_bdevs_operational": 2, 00:09:07.159 "base_bdevs_list": [ 00:09:07.159 { 00:09:07.159 "name": "BaseBdev1", 00:09:07.159 "uuid": "127783f2-1283-4f51-baae-098d170649b7", 00:09:07.159 "is_configured": true, 00:09:07.159 "data_offset": 2048, 00:09:07.159 "data_size": 63488 00:09:07.159 }, 00:09:07.159 { 00:09:07.159 "name": "BaseBdev2", 00:09:07.159 "uuid": "0719230d-b8b5-8650-98ce-14a4693f75e2", 00:09:07.159 "is_configured": true, 00:09:07.159 "data_offset": 2048, 00:09:07.159 "data_size": 63488 00:09:07.159 } 00:09:07.159 ] 00:09:07.159 }' 00:09:07.159 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.159 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.418 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:07.677 [2024-07-15 18:22:59.912588] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.677 [2024-07-15 18:22:59.912618] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.677 [2024-07-15 18:22:59.913001] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.677 [2024-07-15 18:22:59.913010] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.677 [2024-07-15 18:22:59.913024] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.677 [2024-07-15 18:22:59.913028] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3868b3c34f00 name raid_bdev1, state offline 00:09:07.677 0 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51726 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51726 ']' 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51726 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51726 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:07.677 killing process with pid 51726 00:09:07.677 
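For reference, the raid_read_error_test flow traced above reduces to the RPC sequence below. This is a condensed sketch, not the script itself: every command appears verbatim in the trace, the RPC variable is shorthand introduced here, and the socket and script paths are the ones from this run.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Build a malloc -> error -> passthru stack for each base bdev.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $RPC bdev_error_create BaseBdev1_malloc                        # exposes EE_BaseBdev1_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $RPC bdev_error_create BaseBdev2_malloc                        # exposes EE_BaseBdev2_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
  # Assemble the raid1 bdev with an on-disk superblock (-s).
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # Inject read failures into the first base bdev, then drive I/O through bdevperf.
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests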
18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51726' 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51726 00:09:07.677 [2024-07-15 18:22:59.942932] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.677 18:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51726 00:09:07.677 [2024-07-15 18:22:59.957524] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.f0uWUN713Q 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:07.937 00:09:07.937 real 0m6.040s 00:09:07.937 user 0m9.259s 00:09:07.937 sys 0m1.162s 00:09:07.937 ************************************ 00:09:07.937 END TEST raid_read_error_test 00:09:07.937 ************************************ 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.937 18:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.937 18:23:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:07.937 18:23:00 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:07.937 18:23:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:07.937 18:23:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.937 18:23:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.937 ************************************ 00:09:07.937 START TEST raid_write_error_test 00:09:07.937 ************************************ 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:07.937 
18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Y5xPtSNHQl 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51850 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51850 /var/tmp/spdk-raid.sock 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51850 ']' 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:07.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.937 18:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.937 [2024-07-15 18:23:00.253722] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:09:07.937 [2024-07-15 18:23:00.253899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:08.504 EAL: TSC is not safe to use in SMP mode 00:09:08.504 EAL: TSC is not invariant 00:09:08.504 [2024-07-15 18:23:00.853472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.768 [2024-07-15 18:23:00.965199] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
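For reference, both error tests wrap bdevperf the same way, as traced here; condensed below. The flags, paths, and the awk extraction are the ones from this run; the backgrounding and output redirection are implied by the script's mktemp/waitforlisten steps rather than shown explicitly in the trace, so treat them as an assumption.

  bdevperf_log=$(mktemp -p /raidtest)
  # -z makes bdevperf start idle and wait for the perform_tests RPC;
  # the harness backgrounds it and waits for the UNIX socket to appear.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
      > "$bdevperf_log" &
  # Once the raid bdev is assembled, start the timed randrw run:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  # After teardown, the failures-per-second column is pulled from the log:
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')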
00:09:08.768 [2024-07-15 18:23:00.967452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.768 [2024-07-15 18:23:00.968278] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.768 [2024-07-15 18:23:00.968294] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.027 18:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.027 18:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:09.027 18:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:09.027 18:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.285 BaseBdev1_malloc 00:09:09.286 18:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:09.544 true 00:09:09.544 18:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.802 [2024-07-15 18:23:02.136375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.802 [2024-07-15 18:23:02.136466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.802 [2024-07-15 18:23:02.136506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1274ba434780 00:09:09.802 [2024-07-15 18:23:02.136517] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.802 [2024-07-15 18:23:02.137319] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.802 [2024-07-15 18:23:02.137346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.802 BaseBdev1 00:09:09.802 18:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:09.802 18:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.061 BaseBdev2_malloc 00:09:10.061 18:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:10.320 true 00:09:10.320 18:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.579 [2024-07-15 18:23:02.924380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.579 [2024-07-15 18:23:02.924463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.579 [2024-07-15 18:23:02.924494] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1274ba434c80 00:09:10.579 [2024-07-15 18:23:02.924504] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.579 [2024-07-15 18:23:02.925340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.579 [2024-07-15 18:23:02.925368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:09:10.579 BaseBdev2 00:09:10.579 18:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:10.839 [2024-07-15 18:23:03.172393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.839 [2024-07-15 18:23:03.173086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.839 [2024-07-15 18:23:03.173159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1274ba434f00 00:09:10.839 [2024-07-15 18:23:03.173174] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.839 [2024-07-15 18:23:03.173209] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1274ba4a0e20 00:09:10.839 [2024-07-15 18:23:03.173310] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1274ba434f00 00:09:10.839 [2024-07-15 18:23:03.173315] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1274ba434f00 00:09:10.839 [2024-07-15 18:23:03.173349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.839 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.097 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.097 "name": "raid_bdev1", 00:09:11.097 "uuid": "46022b32-42d7-11ef-9ade-d5fc5159efa5", 00:09:11.097 "strip_size_kb": 0, 00:09:11.097 "state": "online", 00:09:11.097 "raid_level": "raid1", 00:09:11.097 "superblock": true, 00:09:11.097 "num_base_bdevs": 2, 00:09:11.097 "num_base_bdevs_discovered": 2, 00:09:11.097 "num_base_bdevs_operational": 2, 00:09:11.097 "base_bdevs_list": [ 00:09:11.097 { 00:09:11.097 "name": "BaseBdev1", 00:09:11.097 "uuid": "06071177-35f0-da58-b18a-a41ee5adbf80", 00:09:11.097 "is_configured": true, 00:09:11.097 "data_offset": 2048, 00:09:11.097 "data_size": 63488 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "name": "BaseBdev2", 00:09:11.097 "uuid": "b0ea70a0-4a95-f758-a934-7bd5a4079052", 
00:09:11.097 "is_configured": true, 00:09:11.097 "data_offset": 2048, 00:09:11.097 "data_size": 63488 00:09:11.097 } 00:09:11.097 ] 00:09:11.097 }' 00:09:11.097 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.097 18:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.665 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:11.665 18:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:11.665 [2024-07-15 18:23:03.892628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1274ba4a0ec0 00:09:12.602 18:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:12.861 [2024-07-15 18:23:05.111299] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:12.861 [2024-07-15 18:23:05.111368] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.861 [2024-07-15 18:23:05.111494] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1274ba4a0ec0 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.861 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.120 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.120 "name": "raid_bdev1", 00:09:13.120 "uuid": "46022b32-42d7-11ef-9ade-d5fc5159efa5", 00:09:13.120 "strip_size_kb": 0, 00:09:13.120 "state": "online", 00:09:13.120 "raid_level": "raid1", 00:09:13.120 
"superblock": true, 00:09:13.120 "num_base_bdevs": 2, 00:09:13.120 "num_base_bdevs_discovered": 1, 00:09:13.120 "num_base_bdevs_operational": 1, 00:09:13.120 "base_bdevs_list": [ 00:09:13.120 { 00:09:13.120 "name": null, 00:09:13.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.120 "is_configured": false, 00:09:13.120 "data_offset": 2048, 00:09:13.120 "data_size": 63488 00:09:13.120 }, 00:09:13.120 { 00:09:13.120 "name": "BaseBdev2", 00:09:13.120 "uuid": "b0ea70a0-4a95-f758-a934-7bd5a4079052", 00:09:13.120 "is_configured": true, 00:09:13.120 "data_offset": 2048, 00:09:13.120 "data_size": 63488 00:09:13.120 } 00:09:13.120 ] 00:09:13.120 }' 00:09:13.120 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.120 18:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.687 18:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:13.687 [2024-07-15 18:23:06.025613] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.687 [2024-07-15 18:23:06.025642] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.687 [2024-07-15 18:23:06.025978] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.687 [2024-07-15 18:23:06.025988] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.687 [2024-07-15 18:23:06.025999] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.687 [2024-07-15 18:23:06.026004] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1274ba434f00 name raid_bdev1, state offline 00:09:13.687 0 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51850 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51850 ']' 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51850 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51850 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:13.687 killing process with pid 51850 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51850' 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51850 00:09:13.687 [2024-07-15 18:23:06.059001] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.687 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51850 00:09:13.946 [2024-07-15 18:23:06.073423] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Y5xPtSNHQl 00:09:13.946 18:23:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:13.946 00:09:13.946 real 0m6.072s 00:09:13.946 user 0m9.216s 00:09:13.946 sys 0m1.148s 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.946 18:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.946 ************************************ 00:09:13.946 END TEST raid_write_error_test 00:09:13.946 ************************************ 00:09:14.206 18:23:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:14.206 18:23:06 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:14.206 18:23:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:14.206 18:23:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:14.206 18:23:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:14.206 18:23:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.206 18:23:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.206 ************************************ 00:09:14.206 START TEST raid_state_function_test 00:09:14.206 ************************************ 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:14.206 18:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51976 00:09:14.206 Process raid pid: 51976 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51976' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51976 /var/tmp/spdk-raid.sock 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51976 ']' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.206 18:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.206 [2024-07-15 18:23:06.365394] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:09:14.206 [2024-07-15 18:23:06.365659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:14.773 EAL: TSC is not safe to use in SMP mode 00:09:14.773 EAL: TSC is not invariant 00:09:14.773 [2024-07-15 18:23:06.976540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.773 [2024-07-15 18:23:07.087383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
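For reference, the core pattern raid_state_function_test exercises below is that a raid bdev registered before its base bdevs exist parks in the "configuring" state. A minimal sketch using the commands from this trace follows; the RPC variable and the trailing .state jq filter are shorthand added here, while the script's own verify_raid_bdev_state helper performs the equivalent field checks.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # None of the three base bdevs exists yet, so the raid cannot go online.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # -> configuring; it stays there until all three base bdevs are registered.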
00:09:14.773 [2024-07-15 18:23:07.089585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.773 [2024-07-15 18:23:07.090419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.773 [2024-07-15 18:23:07.090435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.032 18:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.032 18:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:15.032 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:15.599 [2024-07-15 18:23:07.699033] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.599 [2024-07-15 18:23:07.699092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.599 [2024-07-15 18:23:07.699098] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.599 [2024-07-15 18:23:07.699107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.599 [2024-07-15 18:23:07.699111] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.599 [2024-07-15 18:23:07.699119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.599 18:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.599 "name": "Existed_Raid", 00:09:15.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.599 "strip_size_kb": 64, 00:09:15.599 "state": "configuring", 00:09:15.599 "raid_level": "raid0", 00:09:15.599 "superblock": false, 00:09:15.599 "num_base_bdevs": 3, 00:09:15.599 "num_base_bdevs_discovered": 0, 00:09:15.599 "num_base_bdevs_operational": 3, 00:09:15.599 "base_bdevs_list": [ 
00:09:16.166 18:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:09:16.166 [2024-07-15 18:23:08.495042] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:16.166 [2024-07-15 18:23:08.495078] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5f3d5234500 name Existed_Raid, state configuring
00:09:16.166 18:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:09:16.423 [2024-07-15 18:23:08.763055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:16.423 [2024-07-15 18:23:08.763113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:16.424 [2024-07-15 18:23:08.763120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:16.424 [2024-07-15 18:23:08.763129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:16.424 [2024-07-15 18:23:08.763133] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:16.424 [2024-07-15 18:23:08.763149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:16.424 18:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:09:16.682 [2024-07-15 18:23:09.000102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:16.682 BaseBdev1
00:09:16.682 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:09:16.682 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:09:16.682 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:16.683 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:16.683 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:16.683 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:16.683 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:16.941 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:17.200 [
00:09:17.200 {
00:09:17.200 "name": "BaseBdev1",
00:09:17.200 "aliases": [
00:09:17.200 "497b40e0-42d7-11ef-9ade-d5fc5159efa5"
00:09:17.200 ],
00:09:17.200 "product_name": "Malloc disk",
00:09:17.200 "block_size": 512,
00:09:17.200 "num_blocks": 65536,
00:09:17.200 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:17.200 "assigned_rate_limits": {
00:09:17.200 "rw_ios_per_sec": 0,
00:09:17.200 "rw_mbytes_per_sec": 0,
00:09:17.200 "r_mbytes_per_sec": 0,
00:09:17.200 "w_mbytes_per_sec": 0
00:09:17.200 },
00:09:17.200 "claimed": true,
00:09:17.200 "claim_type": "exclusive_write",
00:09:17.200 "zoned": false,
00:09:17.200 "supported_io_types": {
00:09:17.200 "read": true,
00:09:17.200 "write": true,
00:09:17.200 "unmap": true,
00:09:17.200 "flush": true,
00:09:17.200 "reset": true,
00:09:17.200 "nvme_admin": false,
00:09:17.200 "nvme_io": false,
00:09:17.200 "nvme_io_md": false,
00:09:17.200 "write_zeroes": true,
00:09:17.200 "zcopy": true,
00:09:17.200 "get_zone_info": false,
00:09:17.200 "zone_management": false,
00:09:17.200 "zone_append": false,
00:09:17.200 "compare": false,
00:09:17.200 "compare_and_write": false,
00:09:17.200 "abort": true,
00:09:17.200 "seek_hole": false,
00:09:17.200 "seek_data": false,
00:09:17.200 "copy": true,
00:09:17.200 "nvme_iov_md": false
00:09:17.200 },
00:09:17.200 "memory_domains": [
00:09:17.200 {
00:09:17.200 "dma_device_id": "system",
00:09:17.200 "dma_device_type": 1
00:09:17.200 },
00:09:17.200 {
00:09:17.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:17.200 "dma_device_type": 2
00:09:17.200 }
00:09:17.200 ],
00:09:17.200 "driver_specific": {}
00:09:17.200 }
00:09:17.200 ]
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
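waitforbdev above is the harness's standard add-one-base-bdev pattern: create the malloc bdev (32 MiB here, matching the 65536 blocks of 512 bytes in the JSON), flush examine callbacks, then query the bdev with a 2000 ms timeout. A rough standalone equivalent built from the same three RPCs:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000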
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:17.200 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:17.766 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:17.766 "name": "Existed_Raid",
00:09:17.766 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.766 "strip_size_kb": 64,
00:09:17.766 "state": "configuring",
00:09:17.766 "raid_level": "raid0",
00:09:17.766 "superblock": false,
00:09:17.766 "num_base_bdevs": 3,
00:09:17.766 "num_base_bdevs_discovered": 1,
00:09:17.766 "num_base_bdevs_operational": 3,
00:09:17.766 "base_bdevs_list": [
00:09:17.766 {
00:09:17.766 "name": "BaseBdev1",
00:09:17.766 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:17.766 "is_configured": true,
00:09:17.766 "data_offset": 0,
00:09:17.766 "data_size": 65536
00:09:17.766 },
00:09:17.766 {
00:09:17.766 "name": "BaseBdev2",
00:09:17.766 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.766 "is_configured": false,
00:09:17.766 "data_offset": 0,
00:09:17.766 "data_size": 0
00:09:17.766 },
00:09:17.766 {
00:09:17.766 "name": "BaseBdev3",
00:09:17.766 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.766 "is_configured": false,
00:09:17.766 "data_offset": 0,
00:09:17.766 "data_size": 0
00:09:17.766 }
00:09:17.766 ]
00:09:17.766 }'
00:09:17.766 18:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:17.766 18:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.024 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:09:18.024 [2024-07-15 18:23:10.355164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:18.024 [2024-07-15 18:23:10.355221] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5f3d5234500 name Existed_Raid, state configuring
00:09:18.024 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:09:18.282 [2024-07-15 18:23:10.587115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:18.282 [2024-07-15 18:23:10.587932] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:18.282 [2024-07-15 18:23:10.587974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:18.282 [2024-07-15 18:23:10.587980] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:18.282 [2024-07-15 18:23:10.587989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:18.282 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.540 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:18.540 "name": "Existed_Raid",
00:09:18.540 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.540 "strip_size_kb": 64,
00:09:18.540 "state": "configuring",
00:09:18.540 "raid_level": "raid0",
00:09:18.540 "superblock": false,
00:09:18.540 "num_base_bdevs": 3,
00:09:18.540 "num_base_bdevs_discovered": 1,
00:09:18.540 "num_base_bdevs_operational": 3,
00:09:18.540 "base_bdevs_list": [
00:09:18.540 {
00:09:18.540 "name": "BaseBdev1",
00:09:18.540 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:18.540 "is_configured": true,
00:09:18.540 "data_offset": 0,
00:09:18.540 "data_size": 65536
00:09:18.540 },
00:09:18.540 {
00:09:18.540 "name": "BaseBdev2",
00:09:18.540 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.540 "is_configured": false,
00:09:18.540 "data_offset": 0,
00:09:18.540 "data_size": 0
00:09:18.540 },
00:09:18.540 {
00:09:18.540 "name": "BaseBdev3",
00:09:18.540 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.540 "is_configured": false,
00:09:18.540 "data_offset": 0,
00:09:18.540 "data_size": 0
00:09:18.540 }
00:09:18.540 ]
00:09:18.540 }'
00:09:18.540 18:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:18.540 18:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.797 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:09:19.363 [2024-07-15 18:23:11.447280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:19.363 BaseBdev2
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:19.363 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:19.620 [
00:09:19.620 {
00:09:19.620 "name": "BaseBdev2",
00:09:19.620 "aliases": [
00:09:19.620 "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5"
00:09:19.620 ],
00:09:19.620 "product_name": "Malloc disk",
00:09:19.620 "block_size": 512,
00:09:19.620 "num_blocks": 65536,
00:09:19.620 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:19.620 "assigned_rate_limits": {
00:09:19.620 "rw_ios_per_sec": 0,
00:09:19.620 "rw_mbytes_per_sec": 0,
00:09:19.620 "r_mbytes_per_sec": 0,
00:09:19.620 "w_mbytes_per_sec": 0
00:09:19.620 },
00:09:19.620 "claimed": true,
00:09:19.620 "claim_type": "exclusive_write",
00:09:19.620 "zoned": false,
00:09:19.620 "supported_io_types": {
00:09:19.620 "read": true,
00:09:19.620 "write": true,
00:09:19.620 "unmap": true,
00:09:19.620 "flush": true,
00:09:19.620 "reset": true,
00:09:19.620 "nvme_admin": false,
00:09:19.620 "nvme_io": false,
00:09:19.620 "nvme_io_md": false,
00:09:19.620 "write_zeroes": true,
00:09:19.620 "zcopy": true,
00:09:19.620 "get_zone_info": false,
00:09:19.620 "zone_management": false,
00:09:19.620 "zone_append": false,
00:09:19.620 "compare": false,
00:09:19.620 "compare_and_write": false,
00:09:19.620 "abort": true,
00:09:19.620 "seek_hole": false,
00:09:19.620 "seek_data": false,
00:09:19.620 "copy": true,
00:09:19.620 "nvme_iov_md": false
00:09:19.620 },
00:09:19.620 "memory_domains": [
00:09:19.620 {
00:09:19.620 "dma_device_id": "system",
00:09:19.620 "dma_device_type": 1
00:09:19.620 },
00:09:19.620 {
00:09:19.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:19.620 "dma_device_type": 2
00:09:19.620 }
00:09:19.620 ],
00:09:19.620 "driver_specific": {}
00:09:19.620 }
00:09:19.620 ]
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
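The remaining base bdevs are attached with the same create-and-wait sequence, and after each one the array should still report "configuring" with one more member discovered. A compact sketch of the loop the script is effectively running here (the loop variable is illustrative; the commands are the ones in the trace):

  for bdev in BaseBdev2 BaseBdev3; do
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$bdev"
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev" -t 2000
  done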
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:19.620 18:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.877 18:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:19.878 "name": "Existed_Raid",
00:09:19.878 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.878 "strip_size_kb": 64,
00:09:19.878 "state": "configuring",
00:09:19.878 "raid_level": "raid0",
00:09:19.878 "superblock": false,
00:09:19.878 "num_base_bdevs": 3,
00:09:19.878 "num_base_bdevs_discovered": 2,
00:09:19.878 "num_base_bdevs_operational": 3,
00:09:19.878 "base_bdevs_list": [
00:09:19.878 {
00:09:19.878 "name": "BaseBdev1",
00:09:19.878 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:19.878 "is_configured": true,
00:09:19.878 "data_offset": 0,
00:09:19.878 "data_size": 65536
00:09:19.878 },
00:09:19.878 {
00:09:19.878 "name": "BaseBdev2",
00:09:19.878 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:19.878 "is_configured": true,
00:09:19.878 "data_offset": 0,
00:09:19.878 "data_size": 65536
00:09:19.878 },
00:09:19.878 {
00:09:19.878 "name": "BaseBdev3",
00:09:19.878 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.878 "is_configured": false,
00:09:19.878 "data_offset": 0,
00:09:19.878 "data_size": 0
00:09:19.878 }
00:09:19.878 ]
00:09:19.878 }'
00:09:19.878 18:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:19.878 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:09:20.443 [2024-07-15 18:23:12.779346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:20.443 [2024-07-15 18:23:12.779379] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5f3d5234a00
00:09:20.443 [2024-07-15 18:23:12.779384] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:20.443 [2024-07-15 18:23:12.779416] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5f3d5297e20
00:09:20.443 [2024-07-15 18:23:12.779513] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5f3d5234a00
00:09:20.443 [2024-07-15 18:23:12.779517] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5f3d5234a00
00:09:20.443 [2024-07-15 18:23:12.779552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:20.443 BaseBdev3
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:20.443 18:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:20.700 18:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:20.959 [
00:09:20.959 {
00:09:20.959 "name": "BaseBdev3",
00:09:20.959 "aliases": [
00:09:20.959 "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5"
00:09:20.959 ],
00:09:20.959 "product_name": "Malloc disk",
00:09:20.959 "block_size": 512,
00:09:20.959 "num_blocks": 65536,
00:09:20.959 "uuid": "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5",
00:09:20.959 "assigned_rate_limits": {
00:09:20.959 "rw_ios_per_sec": 0,
00:09:20.959 "rw_mbytes_per_sec": 0,
00:09:20.959 "r_mbytes_per_sec": 0,
00:09:20.959 "w_mbytes_per_sec": 0
00:09:20.959 },
00:09:20.959 "claimed": true,
00:09:20.959 "claim_type": "exclusive_write",
00:09:20.959 "zoned": false,
00:09:20.959 "supported_io_types": {
00:09:20.959 "read": true,
00:09:20.959 "write": true,
00:09:20.959 "unmap": true,
00:09:20.959 "flush": true,
00:09:20.959 "reset": true,
00:09:20.959 "nvme_admin": false,
00:09:20.959 "nvme_io": false,
00:09:20.959 "nvme_io_md": false,
00:09:20.959 "write_zeroes": true,
00:09:20.959 "zcopy": true,
00:09:20.959 "get_zone_info": false,
00:09:20.959 "zone_management": false,
00:09:20.959 "zone_append": false,
00:09:20.959 "compare": false,
00:09:20.959 "compare_and_write": false,
00:09:20.959 "abort": true,
00:09:20.959 "seek_hole": false,
00:09:20.959 "seek_data": false,
00:09:20.959 "copy": true,
00:09:20.959 "nvme_iov_md": false
00:09:20.959 },
00:09:20.959 "memory_domains": [
00:09:20.959 {
00:09:20.959 "dma_device_id": "system",
00:09:20.959 "dma_device_type": 1
00:09:20.959 },
00:09:20.959 {
00:09:20.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:20.959 "dma_device_type": 2
00:09:20.959 }
00:09:20.959 ],
00:09:20.959 "driver_specific": {}
00:09:20.959 }
00:09:20.959 ]
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:21.217 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:21.475 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:21.475 "name": "Existed_Raid",
00:09:21.475 "uuid": "4bbc14eb-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.475 "strip_size_kb": 64,
00:09:21.475 "state": "online",
00:09:21.475 "raid_level": "raid0",
00:09:21.475 "superblock": false,
00:09:21.475 "num_base_bdevs": 3,
00:09:21.475 "num_base_bdevs_discovered": 3,
00:09:21.475 "num_base_bdevs_operational": 3,
00:09:21.475 "base_bdevs_list": [
00:09:21.475 {
00:09:21.475 "name": "BaseBdev1",
00:09:21.475 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.475 "is_configured": true,
00:09:21.475 "data_offset": 0,
00:09:21.475 "data_size": 65536
00:09:21.475 },
00:09:21.475 {
00:09:21.475 "name": "BaseBdev2",
00:09:21.475 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.475 "is_configured": true,
00:09:21.475 "data_offset": 0,
00:09:21.475 "data_size": 65536
00:09:21.475 },
00:09:21.475 {
00:09:21.475 "name": "BaseBdev3",
00:09:21.475 "uuid": "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.475 "is_configured": true,
00:09:21.475 "data_offset": 0,
00:09:21.475 "data_size": 65536
00:09:21.475 }
00:09:21.475 ]
00:09:21.475 }'
00:09:21.475 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:21.475 18:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
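Claiming the third base bdev flips the array to "online", and the creation log reports blockcnt 196608 with blocklen 512: raid0 exposes the aggregate capacity of its members, 3 x 65536 = 196608 blocks (96 MiB at 512-byte blocks). Assuming the same socket, the count can be read back from the raid volume itself:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks'   # 196608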
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:09:21.735 18:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:09:21.994 [2024-07-15 18:23:14.163284] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:21.994 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:09:21.994 "name": "Existed_Raid",
00:09:21.994 "aliases": [
00:09:21.994 "4bbc14eb-42d7-11ef-9ade-d5fc5159efa5"
00:09:21.994 ],
00:09:21.994 "product_name": "Raid Volume",
00:09:21.994 "block_size": 512,
00:09:21.994 "num_blocks": 196608,
00:09:21.994 "uuid": "4bbc14eb-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.994 "assigned_rate_limits": {
00:09:21.994 "rw_ios_per_sec": 0,
00:09:21.994 "rw_mbytes_per_sec": 0,
00:09:21.994 "r_mbytes_per_sec": 0,
00:09:21.994 "w_mbytes_per_sec": 0
00:09:21.994 },
00:09:21.994 "claimed": false,
00:09:21.994 "zoned": false,
00:09:21.994 "supported_io_types": {
00:09:21.994 "read": true,
00:09:21.994 "write": true,
00:09:21.994 "unmap": true,
00:09:21.994 "flush": true,
00:09:21.994 "reset": true,
00:09:21.994 "nvme_admin": false,
00:09:21.994 "nvme_io": false,
00:09:21.994 "nvme_io_md": false,
00:09:21.994 "write_zeroes": true,
00:09:21.994 "zcopy": false,
00:09:21.994 "get_zone_info": false,
00:09:21.994 "zone_management": false,
00:09:21.994 "zone_append": false,
00:09:21.994 "compare": false,
00:09:21.994 "compare_and_write": false,
00:09:21.994 "abort": false,
00:09:21.994 "seek_hole": false,
00:09:21.994 "seek_data": false,
00:09:21.994 "copy": false,
00:09:21.994 "nvme_iov_md": false
00:09:21.994 },
00:09:21.994 "memory_domains": [
00:09:21.994 {
00:09:21.994 "dma_device_id": "system",
00:09:21.994 "dma_device_type": 1
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.994 "dma_device_type": 2
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "dma_device_id": "system",
00:09:21.994 "dma_device_type": 1
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.994 "dma_device_type": 2
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "dma_device_id": "system",
00:09:21.994 "dma_device_type": 1
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.994 "dma_device_type": 2
00:09:21.994 }
00:09:21.994 ],
00:09:21.994 "driver_specific": {
00:09:21.994 "raid": {
00:09:21.994 "uuid": "4bbc14eb-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.994 "strip_size_kb": 64,
00:09:21.994 "state": "online",
00:09:21.994 "raid_level": "raid0",
00:09:21.994 "superblock": false,
00:09:21.994 "num_base_bdevs": 3,
00:09:21.994 "num_base_bdevs_discovered": 3,
00:09:21.994 "num_base_bdevs_operational": 3,
00:09:21.994 "base_bdevs_list": [
00:09:21.994 {
00:09:21.994 "name": "BaseBdev1",
00:09:21.994 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.994 "is_configured": true,
00:09:21.994 "data_offset": 0,
00:09:21.994 "data_size": 65536
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "name": "BaseBdev2",
00:09:21.994 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.994 "is_configured": true,
00:09:21.994 "data_offset": 0,
00:09:21.994 "data_size": 65536
00:09:21.994 },
00:09:21.994 {
00:09:21.994 "name": "BaseBdev3",
00:09:21.994 "uuid": "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5",
00:09:21.994 "is_configured": true,
00:09:21.994 "data_offset": 0,
00:09:21.994 "data_size": 65536
00:09:21.995 }
00:09:21.995 ]
00:09:21.995 }
00:09:21.995 }
00:09:21.995 }'
00:09:21.995 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:21.995 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:09:21.995 BaseBdev2
00:09:21.995 BaseBdev3'
00:09:21.995 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:09:21.995 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:09:21.995 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:09:22.253 "name": "BaseBdev1",
00:09:22.253 "aliases": [
00:09:22.253 "497b40e0-42d7-11ef-9ade-d5fc5159efa5"
00:09:22.253 ],
00:09:22.253 "product_name": "Malloc disk",
00:09:22.253 "block_size": 512,
00:09:22.253 "num_blocks": 65536,
00:09:22.253 "uuid": "497b40e0-42d7-11ef-9ade-d5fc5159efa5",
00:09:22.253 "assigned_rate_limits": {
00:09:22.253 "rw_ios_per_sec": 0,
00:09:22.253 "rw_mbytes_per_sec": 0,
00:09:22.253 "r_mbytes_per_sec": 0,
00:09:22.253 "w_mbytes_per_sec": 0
00:09:22.253 },
00:09:22.253 "claimed": true,
00:09:22.253 "claim_type": "exclusive_write",
00:09:22.253 "zoned": false,
00:09:22.253 "supported_io_types": {
00:09:22.253 "read": true,
00:09:22.253 "write": true,
00:09:22.253 "unmap": true,
00:09:22.253 "flush": true,
00:09:22.253 "reset": true,
00:09:22.253 "nvme_admin": false,
00:09:22.253 "nvme_io": false,
00:09:22.253 "nvme_io_md": false,
00:09:22.253 "write_zeroes": true,
00:09:22.253 "zcopy": true,
00:09:22.253 "get_zone_info": false,
00:09:22.253 "zone_management": false,
00:09:22.253 "zone_append": false,
00:09:22.253 "compare": false,
00:09:22.253 "compare_and_write": false,
00:09:22.253 "abort": true,
00:09:22.253 "seek_hole": false,
00:09:22.253 "seek_data": false,
00:09:22.253 "copy": true,
00:09:22.253 "nvme_iov_md": false
00:09:22.253 },
00:09:22.253 "memory_domains": [
00:09:22.253 {
00:09:22.253 "dma_device_id": "system",
00:09:22.253 "dma_device_type": 1
00:09:22.253 },
00:09:22.253 {
00:09:22.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:22.253 "dma_device_type": 2
00:09:22.253 }
00:09:22.253 ],
00:09:22.253 "driver_specific": {}
00:09:22.253 }'
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:09:22.253 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:09:22.512 "name": "BaseBdev2",
00:09:22.512 "aliases": [
00:09:22.512 "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5"
00:09:22.512 ],
00:09:22.512 "product_name": "Malloc disk",
00:09:22.512 "block_size": 512,
00:09:22.512 "num_blocks": 65536,
00:09:22.512 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:22.512 "assigned_rate_limits": {
00:09:22.512 "rw_ios_per_sec": 0,
00:09:22.512 "rw_mbytes_per_sec": 0,
00:09:22.512 "r_mbytes_per_sec": 0,
00:09:22.512 "w_mbytes_per_sec": 0
00:09:22.512 },
00:09:22.512 "claimed": true,
00:09:22.512 "claim_type": "exclusive_write",
00:09:22.512 "zoned": false,
00:09:22.512 "supported_io_types": {
00:09:22.512 "read": true,
00:09:22.512 "write": true,
00:09:22.512 "unmap": true,
00:09:22.512 "flush": true,
00:09:22.512 "reset": true,
00:09:22.512 "nvme_admin": false,
00:09:22.512 "nvme_io": false,
00:09:22.512 "nvme_io_md": false,
00:09:22.512 "write_zeroes": true,
00:09:22.512 "zcopy": true,
00:09:22.512 "get_zone_info": false,
00:09:22.512 "zone_management": false,
00:09:22.512 "zone_append": false,
00:09:22.512 "compare": false,
00:09:22.512 "compare_and_write": false,
00:09:22.512 "abort": true,
00:09:22.512 "seek_hole": false,
00:09:22.512 "seek_data": false,
00:09:22.512 "copy": true,
00:09:22.512 "nvme_iov_md": false
00:09:22.512 },
00:09:22.512 "memory_domains": [
00:09:22.512 {
00:09:22.512 "dma_device_id": "system",
00:09:22.512 "dma_device_type": 1
00:09:22.512 },
00:09:22.512 {
00:09:22.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:22.512 "dma_device_type": 2
00:09:22.512 }
00:09:22.512 ],
00:09:22.512 "driver_specific": {}
00:09:22.512 }'
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:09:22.512 18:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:09:23.080 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:09:23.080 "name": "BaseBdev3",
00:09:23.081 "aliases": [
00:09:23.081 "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5"
00:09:23.081 ],
00:09:23.081 "product_name": "Malloc disk",
00:09:23.081 "block_size": 512,
00:09:23.081 "num_blocks": 65536,
00:09:23.081 "uuid": "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5",
00:09:23.081 "assigned_rate_limits": {
00:09:23.081 "rw_ios_per_sec": 0,
00:09:23.081 "rw_mbytes_per_sec": 0,
00:09:23.081 "r_mbytes_per_sec": 0,
00:09:23.081 "w_mbytes_per_sec": 0
00:09:23.081 },
00:09:23.081 "claimed": true,
00:09:23.081 "claim_type": "exclusive_write",
00:09:23.081 "zoned": false,
00:09:23.081 "supported_io_types": {
00:09:23.081 "read": true,
00:09:23.081 "write": true,
00:09:23.081 "unmap": true,
00:09:23.081 "flush": true,
00:09:23.081 "reset": true,
00:09:23.081 "nvme_admin": false,
00:09:23.081 "nvme_io": false,
00:09:23.081 "nvme_io_md": false,
00:09:23.081 "write_zeroes": true,
00:09:23.081 "zcopy": true,
00:09:23.081 "get_zone_info": false,
00:09:23.081 "zone_management": false,
00:09:23.081 "zone_append": false,
00:09:23.081 "compare": false,
00:09:23.081 "compare_and_write": false,
00:09:23.081 "abort": true,
00:09:23.081 "seek_hole": false,
00:09:23.081 "seek_data": false,
00:09:23.081 "copy": true,
00:09:23.081 "nvme_iov_md": false
00:09:23.081 },
00:09:23.081 "memory_domains": [
00:09:23.081 {
00:09:23.081 "dma_device_id": "system",
00:09:23.081 "dma_device_type": 1
00:09:23.081 },
00:09:23.081 {
00:09:23.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:23.081 "dma_device_type": 2
00:09:23.081 }
00:09:23.081 ],
00:09:23.081 "driver_specific": {}
00:09:23.081 }'
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:09:23.081 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:09:23.340 [2024-07-15 18:23:15.467289] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:23.340 [2024-07-15 18:23:15.467315] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:23.340 [2024-07-15 18:23:15.467330] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:23.340 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:23.598 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:23.598 "name": "Existed_Raid",
00:09:23.598 "uuid": "4bbc14eb-42d7-11ef-9ade-d5fc5159efa5",
00:09:23.598 "strip_size_kb": 64,
00:09:23.598 "state": "offline",
00:09:23.598 "raid_level": "raid0",
00:09:23.598 "superblock": false,
00:09:23.598 "num_base_bdevs": 3,
00:09:23.598 "num_base_bdevs_discovered": 2,
00:09:23.598 "num_base_bdevs_operational": 2,
00:09:23.598 "base_bdevs_list": [
00:09:23.598 {
00:09:23.598 "name": null,
00:09:23.598 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.598 "is_configured": false,
00:09:23.598 "data_offset": 0,
00:09:23.598 "data_size": 65536
00:09:23.598 },
00:09:23.598 {
00:09:23.598 "name": "BaseBdev2",
00:09:23.598 "uuid": "4af0cc0a-42d7-11ef-9ade-d5fc5159efa5",
00:09:23.598 "is_configured": true,
00:09:23.598 "data_offset": 0,
00:09:23.598 "data_size": 65536
00:09:23.598 },
00:09:23.598 {
00:09:23.598 "name": "BaseBdev3",
00:09:23.598 "uuid": "4bbc0e49-42d7-11ef-9ade-d5fc5159efa5",
00:09:23.598 "is_configured": true,
00:09:23.598 "data_offset": 0,
00:09:23.598 "data_size": 65536
00:09:23.598 }
00:09:23.598 ]
00:09:23.598 }'
00:09:23.598 18:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:23.598 18:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
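Deleting BaseBdev1 out from under the online array drives it to "offline": has_redundancy returns 1 for raid0, so the loss of any member is fatal and the vacated slot shows up as "name": null in base_bdevs_list. The helper's case statement reduces to something like the following paraphrase (which levels count as redundant is an assumption here, not shown in this trace):

  has_redundancy() {
      case $1 in
          raid1 | raid5f) return 0 ;;  # assumed redundant levels
          *) return 1 ;;               # raid0/concat: no redundancy
      esac
  }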
00:09:23.857 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
00:09:23.857 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:09:23.857 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:23.857 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:09:24.115 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:09:24.115 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:24.115 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:09:24.373 [2024-07-15 18:23:16.561212] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:24.373 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:09:24.373 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:09:24.373 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:09:24.373 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:24.632 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:09:24.632 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:24.632 18:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:09:24.891 [2024-07-15 18:23:17.069466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:24.891 [2024-07-15 18:23:17.069497] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5f3d5234a00 name Existed_Raid, state offline
00:09:24.891 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:09:24.891 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:09:24.891 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:24.891 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']'
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 ))
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:09:25.150 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:09:25.409 BaseBdev2
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:25.409 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:25.668 18:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:25.928 [
00:09:25.928 {
00:09:25.928 "name": "BaseBdev2",
00:09:25.928 "aliases": [
00:09:25.928 "4e9d2441-42d7-11ef-9ade-d5fc5159efa5"
00:09:25.928 ],
00:09:25.928 "product_name": "Malloc disk",
00:09:25.928 "block_size": 512,
00:09:25.928 "num_blocks": 65536,
00:09:25.928 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5",
00:09:25.928 "assigned_rate_limits": {
00:09:25.928 "rw_ios_per_sec": 0,
00:09:25.928 "rw_mbytes_per_sec": 0,
00:09:25.928 "r_mbytes_per_sec": 0,
00:09:25.928 "w_mbytes_per_sec": 0
00:09:25.928 },
00:09:25.928 "claimed": false,
00:09:25.928 "zoned": false,
00:09:25.928 "supported_io_types": {
00:09:25.928 "read": true,
00:09:25.928 "write": true,
00:09:25.928 "unmap": true,
00:09:25.928 "flush": true,
00:09:25.928 "reset": true,
00:09:25.928 "nvme_admin": false,
00:09:25.928 "nvme_io": false,
00:09:25.928 "nvme_io_md": false,
00:09:25.928 "write_zeroes": true,
00:09:25.928 "zcopy": true,
00:09:25.928 "get_zone_info": false,
00:09:25.928 "zone_management": false,
00:09:25.928 "zone_append": false,
00:09:25.928 "compare": false,
00:09:25.928 "compare_and_write": false,
00:09:25.928 "abort": true,
00:09:25.928 "seek_hole": false,
00:09:25.928 "seek_data": false,
00:09:25.928 "copy": true,
00:09:25.928 "nvme_iov_md": false
00:09:25.928 },
00:09:25.928 "memory_domains": [
00:09:25.928 {
00:09:25.928 "dma_device_id": "system",
00:09:25.928 "dma_device_type": 1
00:09:25.928 },
00:09:25.928 {
00:09:25.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.928 "dma_device_type": 2
00:09:25.928 }
00:09:25.928 ],
00:09:25.928 "driver_specific": {}
00:09:25.928 }
00:09:25.928 ]
00:09:25.928 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:09:25.928 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:09:25.928 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:09:25.928 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:09:26.188 BaseBdev3
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:26.188 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:26.447 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:26.705 [
00:09:26.705 {
00:09:26.705 "name": "BaseBdev3",
00:09:26.705 "aliases": [
00:09:26.705 "4f19a883-42d7-11ef-9ade-d5fc5159efa5"
00:09:26.705 ],
00:09:26.705 "product_name": "Malloc disk",
00:09:26.705 "block_size": 512,
00:09:26.705 "num_blocks": 65536,
00:09:26.705 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5",
00:09:26.705 "assigned_rate_limits": {
00:09:26.705 "rw_ios_per_sec": 0,
00:09:26.705 "rw_mbytes_per_sec": 0,
00:09:26.705 "r_mbytes_per_sec": 0,
00:09:26.705 "w_mbytes_per_sec": 0
00:09:26.705 },
00:09:26.705 "claimed": false,
00:09:26.705 "zoned": false,
00:09:26.705 "supported_io_types": {
00:09:26.705 "read": true,
00:09:26.705 "write": true,
00:09:26.705 "unmap": true,
00:09:26.705 "flush": true,
00:09:26.705 "reset": true,
00:09:26.705 "nvme_admin": false,
00:09:26.705 "nvme_io": false,
00:09:26.705 "nvme_io_md": false,
00:09:26.705 "write_zeroes": true,
00:09:26.705 "zcopy": true,
00:09:26.705 "get_zone_info": false,
00:09:26.705 "zone_management": false,
00:09:26.705 "zone_append": false,
00:09:26.705 "compare": false,
00:09:26.705 "compare_and_write": false,
00:09:26.705 "abort": true,
00:09:26.705 "seek_hole": false,
00:09:26.705 "seek_data": false,
00:09:26.705 "copy": true,
00:09:26.705 "nvme_iov_md": false
00:09:26.705 },
00:09:26.705 "memory_domains": [
00:09:26.706 {
00:09:26.706 "dma_device_id": "system",
00:09:26.706 "dma_device_type": 1
00:09:26.706 },
00:09:26.706 {
00:09:26.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.706 "dma_device_type": 2
00:09:26.706 }
00:09:26.706 ],
00:09:26.706 "driver_specific": {}
00:09:26.706 }
00:09:26.706 ]
00:09:26.706 18:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:09:26.706 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:09:26.706 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:09:26.706 18:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:09:26.964 [2024-07-15 18:23:19.197837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:26.964 [2024-07-15 18:23:19.197888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:26.964 [2024-07-15 18:23:19.197898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:26.964 [2024-07-15 18:23:19.198470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
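This create is issued while only BaseBdev2 and BaseBdev3 exist: both are claimed immediately, BaseBdev1 is reported missing, and the array sits in "configuring" with 2 of 3 members discovered, as the verification below confirms. The discovered count can be read back the same way as the state:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid").num_base_bdevs_discovered'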
00:09:26.964 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:26.964 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:26.964 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:26.964 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:26.965 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.223 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:27.223 "name": "Existed_Raid",
00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.223 "strip_size_kb": 64,
00:09:27.223 "state": "configuring",
00:09:27.223 "raid_level": "raid0",
00:09:27.223 "superblock": false,
00:09:27.223 "num_base_bdevs": 3,
00:09:27.223 "num_base_bdevs_discovered": 2,
00:09:27.223 "num_base_bdevs_operational": 3,
00:09:27.223 "base_bdevs_list": [
00:09:27.223 {
00:09:27.223 "name": "BaseBdev1",
00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.223 "is_configured": false,
00:09:27.223 "data_offset": 0,
00:09:27.223 "data_size": 0
00:09:27.223 },
00:09:27.223 {
00:09:27.223 "name": "BaseBdev2",
00:09:27.223 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5",
00:09:27.223 "is_configured": true,
00:09:27.223 "data_offset": 0,
00:09:27.223 "data_size": 65536
00:09:27.223 },
00:09:27.223 {
00:09:27.223 "name": "BaseBdev3",
00:09:27.223 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5",
00:09:27.223 "is_configured": true,
00:09:27.223 "data_offset": 0,
00:09:27.223 "data_size": 65536
00:09:27.223 }
00:09:27.223 ]
00:09:27.223 }'
00:09:27.223 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:27.223 18:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.482 18:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:09:27.742 [2024-07-15 18:23:20.009874] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
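bdev_raid_remove_base_bdev detaches a member from the still-configuring array: the slot is kept in base_bdevs_list but its name becomes null, and the discovered count drops back to 1 in the dump that follows. The bare invocation, exactly as the harness runs it:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2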
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:27.742 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.000 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:28.000 "name": "Existed_Raid",
00:09:28.000 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.000 "strip_size_kb": 64,
00:09:28.000 "state": "configuring",
00:09:28.000 "raid_level": "raid0",
00:09:28.000 "superblock": false,
00:09:28.000 "num_base_bdevs": 3,
00:09:28.000 "num_base_bdevs_discovered": 1,
00:09:28.000 "num_base_bdevs_operational": 3,
00:09:28.000 "base_bdevs_list": [
00:09:28.000 {
00:09:28.000 "name": "BaseBdev1",
00:09:28.000 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.000 "is_configured": false,
00:09:28.000 "data_offset": 0,
00:09:28.000 "data_size": 0
00:09:28.000 },
00:09:28.000 {
00:09:28.000 "name": null,
00:09:28.000 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5",
00:09:28.000 "is_configured": false,
00:09:28.000 "data_offset": 0,
00:09:28.000 "data_size": 65536
00:09:28.000 },
00:09:28.000 {
00:09:28.000 "name": "BaseBdev3",
00:09:28.000 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5",
00:09:28.000 "is_configured": true,
00:09:28.000 "data_offset": 0,
00:09:28.000 "data_size": 65536
00:09:28.000 }
00:09:28.000 ]
00:09:28.000 }'
00:09:28.000 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:28.000 18:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.259 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:28.259 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:28.517 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:09:28.517 18:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:09:28.775 [2024-07-15 18:23:21.070046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:28.775 BaseBdev1
00:09:28.775 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:09:28.775 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:09:28.775 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:28.775 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:09:28.776 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:28.776 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:28.776 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:29.034 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:29.293 [
00:09:29.293 {
00:09:29.293 "name": "BaseBdev1",
00:09:29.293 "aliases": [
00:09:29.293 "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5"
00:09:29.293 ],
00:09:29.293 "product_name": "Malloc disk",
00:09:29.293 "block_size": 512,
00:09:29.293 "num_blocks": 65536,
00:09:29.293 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5",
00:09:29.293 "assigned_rate_limits": {
00:09:29.293 "rw_ios_per_sec": 0,
00:09:29.293 "rw_mbytes_per_sec": 0,
00:09:29.293 "r_mbytes_per_sec": 0,
00:09:29.293 "w_mbytes_per_sec": 0
00:09:29.293 },
00:09:29.293 "claimed": true,
00:09:29.293 "claim_type": "exclusive_write",
00:09:29.293 "zoned": false,
00:09:29.293 "supported_io_types": {
00:09:29.293 "read": true,
00:09:29.293 "write": true,
00:09:29.293 "unmap": true,
00:09:29.293 "flush": true,
00:09:29.293 "reset": true,
00:09:29.293 "nvme_admin": false,
00:09:29.293 "nvme_io": false,
00:09:29.293 "nvme_io_md": false,
00:09:29.293 "write_zeroes": true,
00:09:29.293 "zcopy": true,
00:09:29.293 "get_zone_info": false,
00:09:29.293 "zone_management": false,
00:09:29.293 "zone_append": false,
00:09:29.293 "compare": false,
00:09:29.293 "compare_and_write": false,
00:09:29.293 "abort": true,
00:09:29.293 "seek_hole": false,
00:09:29.293 "seek_data": false,
00:09:29.293 "copy": true,
00:09:29.293 "nvme_iov_md": false
00:09:29.293 },
00:09:29.293 "memory_domains": [
00:09:29.293 {
00:09:29.293 "dma_device_id": "system",
00:09:29.293 "dma_device_type": 1
00:09:29.293 },
00:09:29.293 {
00:09:29.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.293 "dma_device_type": 2
00:09:29.293 }
00:09:29.293 ],
00:09:29.293 "driver_specific": {}
00:09:29.293 }
00:09:29.293 ]
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:29.293 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.552 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:29.552 "name": "Existed_Raid",
00:09:29.552 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.552 "strip_size_kb": 64,
00:09:29.552 "state": "configuring",
00:09:29.552 "raid_level": "raid0",
00:09:29.552 "superblock": false,
00:09:29.552 "num_base_bdevs": 3,
00:09:29.552 "num_base_bdevs_discovered": 2,
00:09:29.552 "num_base_bdevs_operational": 3,
00:09:29.552 "base_bdevs_list": [
00:09:29.552 {
00:09:29.552 "name": "BaseBdev1",
00:09:29.552 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5",
00:09:29.552 "is_configured": true,
00:09:29.552 "data_offset": 0,
00:09:29.552 "data_size": 65536
00:09:29.552 },
00:09:29.552 {
00:09:29.552 "name": null,
00:09:29.552 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5",
00:09:29.552 "is_configured": false,
00:09:29.552 "data_offset": 0,
00:09:29.552 "data_size": 65536
00:09:29.552 },
00:09:29.552 {
00:09:29.552 "name": "BaseBdev3",
00:09:29.552 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5",
00:09:29.552 "is_configured": true,
00:09:29.552 "data_offset": 0,
00:09:29.552 "data_size": 65536
00:09:29.552 }
00:09:29.552 ]
00:09:29.552 }'
00:09:29.552 18:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:29.552 18:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.810 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:29.810 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:30.068 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:09:30.068 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:09:30.326 [2024-07-15 18:23:22.689983] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:30.326 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:30.585 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:30.585 18:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:30.843 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:30.843 "name": "Existed_Raid",
00:09:30.843 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:30.843 "strip_size_kb": 64,
00:09:30.843 "state": "configuring",
00:09:30.843 "raid_level": "raid0",
00:09:30.843 "superblock": false,
00:09:30.843 "num_base_bdevs": 3,
00:09:30.843 "num_base_bdevs_discovered": 1,
00:09:30.843 "num_base_bdevs_operational": 3,
00:09:30.843 "base_bdevs_list": [
00:09:30.843 {
00:09:30.843 "name": "BaseBdev1",
00:09:30.843 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5",
00:09:30.843 "is_configured": true,
00:09:30.843 "data_offset": 0,
00:09:30.843 "data_size": 65536
00:09:30.843 },
00:09:30.843 {
00:09:30.843 "name": null,
00:09:30.843 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5",
00:09:30.843 "is_configured": false,
00:09:30.843 "data_offset": 0,
00:09:30.843 "data_size": 65536
00:09:30.843 },
00:09:30.843 {
00:09:30.843 "name": null,
00:09:30.843 "uuid":
"4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:30.843 "is_configured": false, 00:09:30.843 "data_offset": 0, 00:09:30.843 "data_size": 65536 00:09:30.843 } 00:09:30.843 ] 00:09:30.843 }' 00:09:30.843 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.843 18:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.101 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.101 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.359 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:31.359 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.615 [2024-07-15 18:23:23.850050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.615 18:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.873 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.873 "name": "Existed_Raid", 00:09:31.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.873 "strip_size_kb": 64, 00:09:31.873 "state": "configuring", 00:09:31.873 "raid_level": "raid0", 00:09:31.873 "superblock": false, 00:09:31.873 "num_base_bdevs": 3, 00:09:31.873 "num_base_bdevs_discovered": 2, 00:09:31.873 "num_base_bdevs_operational": 3, 00:09:31.873 "base_bdevs_list": [ 00:09:31.873 { 00:09:31.873 "name": "BaseBdev1", 00:09:31.873 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:31.873 "is_configured": true, 00:09:31.873 "data_offset": 0, 00:09:31.873 "data_size": 65536 00:09:31.873 }, 00:09:31.873 { 00:09:31.873 "name": null, 00:09:31.873 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:31.873 "is_configured": false, 00:09:31.873 "data_offset": 0, 00:09:31.873 "data_size": 65536 
00:09:31.873 }, 00:09:31.873 { 00:09:31.873 "name": "BaseBdev3", 00:09:31.873 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:31.873 "is_configured": true, 00:09:31.873 "data_offset": 0, 00:09:31.873 "data_size": 65536 00:09:31.873 } 00:09:31.873 ] 00:09:31.873 }' 00:09:31.873 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.873 18:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.437 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.437 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.437 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:32.437 18:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:32.695 [2024-07-15 18:23:25.018107] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.695 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.954 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.954 "name": "Existed_Raid", 00:09:32.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.954 "strip_size_kb": 64, 00:09:32.954 "state": "configuring", 00:09:32.954 "raid_level": "raid0", 00:09:32.954 "superblock": false, 00:09:32.954 "num_base_bdevs": 3, 00:09:32.954 "num_base_bdevs_discovered": 1, 00:09:32.954 "num_base_bdevs_operational": 3, 00:09:32.954 "base_bdevs_list": [ 00:09:32.954 { 00:09:32.954 "name": null, 00:09:32.954 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:32.954 "is_configured": false, 00:09:32.954 "data_offset": 0, 00:09:32.954 "data_size": 65536 00:09:32.954 }, 00:09:32.954 { 00:09:32.954 "name": null, 00:09:32.954 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:32.954 "is_configured": false, 00:09:32.954 "data_offset": 
0, 00:09:32.954 "data_size": 65536 00:09:32.954 }, 00:09:32.954 { 00:09:32.954 "name": "BaseBdev3", 00:09:32.954 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:32.954 "is_configured": true, 00:09:32.954 "data_offset": 0, 00:09:32.954 "data_size": 65536 00:09:32.954 } 00:09:32.954 ] 00:09:32.954 }' 00:09:32.954 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.954 18:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.528 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.528 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.528 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:33.528 18:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:33.786 [2024-07-15 18:23:26.162377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.044 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.302 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.302 "name": "Existed_Raid", 00:09:34.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.302 "strip_size_kb": 64, 00:09:34.303 "state": "configuring", 00:09:34.303 "raid_level": "raid0", 00:09:34.303 "superblock": false, 00:09:34.303 "num_base_bdevs": 3, 00:09:34.303 "num_base_bdevs_discovered": 2, 00:09:34.303 "num_base_bdevs_operational": 3, 00:09:34.303 "base_bdevs_list": [ 00:09:34.303 { 00:09:34.303 "name": null, 00:09:34.303 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:34.303 "is_configured": false, 00:09:34.303 "data_offset": 0, 00:09:34.303 "data_size": 65536 00:09:34.303 }, 00:09:34.303 { 00:09:34.303 "name": "BaseBdev2", 00:09:34.303 "uuid": 
"4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:34.303 "is_configured": true, 00:09:34.303 "data_offset": 0, 00:09:34.303 "data_size": 65536 00:09:34.303 }, 00:09:34.303 { 00:09:34.303 "name": "BaseBdev3", 00:09:34.303 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:34.303 "is_configured": true, 00:09:34.303 "data_offset": 0, 00:09:34.303 "data_size": 65536 00:09:34.303 } 00:09:34.303 ] 00:09:34.303 }' 00:09:34.303 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.303 18:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.561 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.561 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.819 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:34.819 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.819 18:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:35.077 18:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 50ad1ddd-42d7-11ef-9ade-d5fc5159efa5 00:09:35.336 [2024-07-15 18:23:27.530573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:35.336 [2024-07-15 18:23:27.530602] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5f3d5234a00 00:09:35.336 [2024-07-15 18:23:27.530606] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:35.336 [2024-07-15 18:23:27.530630] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5f3d5297e20 00:09:35.336 [2024-07-15 18:23:27.530703] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5f3d5234a00 00:09:35.336 [2024-07-15 18:23:27.530708] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5f3d5234a00 00:09:35.336 [2024-07-15 18:23:27.530741] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.336 NewBaseBdev 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:35.336 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.592 18:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:09:35.849 [ 00:09:35.849 { 00:09:35.849 "name": "NewBaseBdev", 00:09:35.849 "aliases": [ 00:09:35.849 "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5" 00:09:35.849 ], 00:09:35.849 "product_name": "Malloc disk", 00:09:35.849 "block_size": 512, 00:09:35.849 "num_blocks": 65536, 00:09:35.850 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:35.850 "assigned_rate_limits": { 00:09:35.850 "rw_ios_per_sec": 0, 00:09:35.850 "rw_mbytes_per_sec": 0, 00:09:35.850 "r_mbytes_per_sec": 0, 00:09:35.850 "w_mbytes_per_sec": 0 00:09:35.850 }, 00:09:35.850 "claimed": true, 00:09:35.850 "claim_type": "exclusive_write", 00:09:35.850 "zoned": false, 00:09:35.850 "supported_io_types": { 00:09:35.850 "read": true, 00:09:35.850 "write": true, 00:09:35.850 "unmap": true, 00:09:35.850 "flush": true, 00:09:35.850 "reset": true, 00:09:35.850 "nvme_admin": false, 00:09:35.850 "nvme_io": false, 00:09:35.850 "nvme_io_md": false, 00:09:35.850 "write_zeroes": true, 00:09:35.850 "zcopy": true, 00:09:35.850 "get_zone_info": false, 00:09:35.850 "zone_management": false, 00:09:35.850 "zone_append": false, 00:09:35.850 "compare": false, 00:09:35.850 "compare_and_write": false, 00:09:35.850 "abort": true, 00:09:35.850 "seek_hole": false, 00:09:35.850 "seek_data": false, 00:09:35.850 "copy": true, 00:09:35.850 "nvme_iov_md": false 00:09:35.850 }, 00:09:35.850 "memory_domains": [ 00:09:35.850 { 00:09:35.850 "dma_device_id": "system", 00:09:35.850 "dma_device_type": 1 00:09:35.850 }, 00:09:35.850 { 00:09:35.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.850 "dma_device_type": 2 00:09:35.850 } 00:09:35.850 ], 00:09:35.850 "driver_specific": {} 00:09:35.850 } 00:09:35.850 ] 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.850 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.106 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.106 "name": "Existed_Raid", 00:09:36.107 "uuid": "5486f08e-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.107 "strip_size_kb": 64, 00:09:36.107 "state": "online", 00:09:36.107 "raid_level": "raid0", 
00:09:36.107 "superblock": false, 00:09:36.107 "num_base_bdevs": 3, 00:09:36.107 "num_base_bdevs_discovered": 3, 00:09:36.107 "num_base_bdevs_operational": 3, 00:09:36.107 "base_bdevs_list": [ 00:09:36.107 { 00:09:36.107 "name": "NewBaseBdev", 00:09:36.107 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.107 "is_configured": true, 00:09:36.107 "data_offset": 0, 00:09:36.107 "data_size": 65536 00:09:36.107 }, 00:09:36.107 { 00:09:36.107 "name": "BaseBdev2", 00:09:36.107 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.107 "is_configured": true, 00:09:36.107 "data_offset": 0, 00:09:36.107 "data_size": 65536 00:09:36.107 }, 00:09:36.107 { 00:09:36.107 "name": "BaseBdev3", 00:09:36.107 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.107 "is_configured": true, 00:09:36.107 "data_offset": 0, 00:09:36.107 "data_size": 65536 00:09:36.107 } 00:09:36.107 ] 00:09:36.107 }' 00:09:36.107 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.107 18:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:36.364 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:36.621 [2024-07-15 18:23:28.950548] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:36.621 "name": "Existed_Raid", 00:09:36.621 "aliases": [ 00:09:36.621 "5486f08e-42d7-11ef-9ade-d5fc5159efa5" 00:09:36.621 ], 00:09:36.621 "product_name": "Raid Volume", 00:09:36.621 "block_size": 512, 00:09:36.621 "num_blocks": 196608, 00:09:36.621 "uuid": "5486f08e-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.621 "assigned_rate_limits": { 00:09:36.621 "rw_ios_per_sec": 0, 00:09:36.621 "rw_mbytes_per_sec": 0, 00:09:36.621 "r_mbytes_per_sec": 0, 00:09:36.621 "w_mbytes_per_sec": 0 00:09:36.621 }, 00:09:36.621 "claimed": false, 00:09:36.621 "zoned": false, 00:09:36.621 "supported_io_types": { 00:09:36.621 "read": true, 00:09:36.621 "write": true, 00:09:36.621 "unmap": true, 00:09:36.621 "flush": true, 00:09:36.621 "reset": true, 00:09:36.621 "nvme_admin": false, 00:09:36.621 "nvme_io": false, 00:09:36.621 "nvme_io_md": false, 00:09:36.621 "write_zeroes": true, 00:09:36.621 "zcopy": false, 00:09:36.621 "get_zone_info": false, 00:09:36.621 "zone_management": false, 00:09:36.621 "zone_append": false, 00:09:36.621 "compare": false, 00:09:36.621 "compare_and_write": false, 00:09:36.621 "abort": false, 00:09:36.621 "seek_hole": false, 00:09:36.621 "seek_data": false, 00:09:36.621 "copy": false, 00:09:36.621 "nvme_iov_md": false 00:09:36.621 }, 00:09:36.621 
"memory_domains": [ 00:09:36.621 { 00:09:36.621 "dma_device_id": "system", 00:09:36.621 "dma_device_type": 1 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.621 "dma_device_type": 2 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "dma_device_id": "system", 00:09:36.621 "dma_device_type": 1 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.621 "dma_device_type": 2 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "dma_device_id": "system", 00:09:36.621 "dma_device_type": 1 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.621 "dma_device_type": 2 00:09:36.621 } 00:09:36.621 ], 00:09:36.621 "driver_specific": { 00:09:36.621 "raid": { 00:09:36.621 "uuid": "5486f08e-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.621 "strip_size_kb": 64, 00:09:36.621 "state": "online", 00:09:36.621 "raid_level": "raid0", 00:09:36.621 "superblock": false, 00:09:36.621 "num_base_bdevs": 3, 00:09:36.621 "num_base_bdevs_discovered": 3, 00:09:36.621 "num_base_bdevs_operational": 3, 00:09:36.621 "base_bdevs_list": [ 00:09:36.621 { 00:09:36.621 "name": "NewBaseBdev", 00:09:36.621 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.621 "is_configured": true, 00:09:36.621 "data_offset": 0, 00:09:36.621 "data_size": 65536 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "name": "BaseBdev2", 00:09:36.621 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.621 "is_configured": true, 00:09:36.621 "data_offset": 0, 00:09:36.621 "data_size": 65536 00:09:36.621 }, 00:09:36.621 { 00:09:36.621 "name": "BaseBdev3", 00:09:36.621 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:36.621 "is_configured": true, 00:09:36.621 "data_offset": 0, 00:09:36.621 "data_size": 65536 00:09:36.621 } 00:09:36.621 ] 00:09:36.621 } 00:09:36.621 } 00:09:36.621 }' 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:36.621 BaseBdev2 00:09:36.621 BaseBdev3' 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:36.621 18:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:37.185 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:37.185 "name": "NewBaseBdev", 00:09:37.185 "aliases": [ 00:09:37.185 "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5" 00:09:37.185 ], 00:09:37.185 "product_name": "Malloc disk", 00:09:37.185 "block_size": 512, 00:09:37.185 "num_blocks": 65536, 00:09:37.185 "uuid": "50ad1ddd-42d7-11ef-9ade-d5fc5159efa5", 00:09:37.185 "assigned_rate_limits": { 00:09:37.185 "rw_ios_per_sec": 0, 00:09:37.185 "rw_mbytes_per_sec": 0, 00:09:37.185 "r_mbytes_per_sec": 0, 00:09:37.185 "w_mbytes_per_sec": 0 00:09:37.185 }, 00:09:37.185 "claimed": true, 00:09:37.185 "claim_type": "exclusive_write", 00:09:37.185 "zoned": false, 00:09:37.185 "supported_io_types": { 00:09:37.186 "read": true, 00:09:37.186 "write": true, 00:09:37.186 "unmap": true, 00:09:37.186 "flush": true, 00:09:37.186 "reset": true, 00:09:37.186 "nvme_admin": false, 00:09:37.186 "nvme_io": false, 
00:09:37.186 "nvme_io_md": false, 00:09:37.186 "write_zeroes": true, 00:09:37.186 "zcopy": true, 00:09:37.186 "get_zone_info": false, 00:09:37.186 "zone_management": false, 00:09:37.186 "zone_append": false, 00:09:37.186 "compare": false, 00:09:37.186 "compare_and_write": false, 00:09:37.186 "abort": true, 00:09:37.186 "seek_hole": false, 00:09:37.186 "seek_data": false, 00:09:37.186 "copy": true, 00:09:37.186 "nvme_iov_md": false 00:09:37.186 }, 00:09:37.186 "memory_domains": [ 00:09:37.186 { 00:09:37.186 "dma_device_id": "system", 00:09:37.186 "dma_device_type": 1 00:09:37.186 }, 00:09:37.186 { 00:09:37.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.186 "dma_device_type": 2 00:09:37.186 } 00:09:37.186 ], 00:09:37.186 "driver_specific": {} 00:09:37.186 }' 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:37.186 "name": "BaseBdev2", 00:09:37.186 "aliases": [ 00:09:37.186 "4e9d2441-42d7-11ef-9ade-d5fc5159efa5" 00:09:37.186 ], 00:09:37.186 "product_name": "Malloc disk", 00:09:37.186 "block_size": 512, 00:09:37.186 "num_blocks": 65536, 00:09:37.186 "uuid": "4e9d2441-42d7-11ef-9ade-d5fc5159efa5", 00:09:37.186 "assigned_rate_limits": { 00:09:37.186 "rw_ios_per_sec": 0, 00:09:37.186 "rw_mbytes_per_sec": 0, 00:09:37.186 "r_mbytes_per_sec": 0, 00:09:37.186 "w_mbytes_per_sec": 0 00:09:37.186 }, 00:09:37.186 "claimed": true, 00:09:37.186 "claim_type": "exclusive_write", 00:09:37.186 "zoned": false, 00:09:37.186 "supported_io_types": { 00:09:37.186 "read": true, 00:09:37.186 "write": true, 00:09:37.186 "unmap": true, 00:09:37.186 "flush": true, 00:09:37.186 "reset": true, 00:09:37.186 "nvme_admin": false, 00:09:37.186 "nvme_io": false, 00:09:37.186 "nvme_io_md": false, 00:09:37.186 "write_zeroes": true, 00:09:37.186 "zcopy": true, 00:09:37.186 "get_zone_info": false, 00:09:37.186 "zone_management": false, 00:09:37.186 "zone_append": 
false, 00:09:37.186 "compare": false, 00:09:37.186 "compare_and_write": false, 00:09:37.186 "abort": true, 00:09:37.186 "seek_hole": false, 00:09:37.186 "seek_data": false, 00:09:37.186 "copy": true, 00:09:37.186 "nvme_iov_md": false 00:09:37.186 }, 00:09:37.186 "memory_domains": [ 00:09:37.186 { 00:09:37.186 "dma_device_id": "system", 00:09:37.186 "dma_device_type": 1 00:09:37.186 }, 00:09:37.186 { 00:09:37.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.186 "dma_device_type": 2 00:09:37.186 } 00:09:37.186 ], 00:09:37.186 "driver_specific": {} 00:09:37.186 }' 00:09:37.186 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:37.500 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:37.757 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:37.757 "name": "BaseBdev3", 00:09:37.757 "aliases": [ 00:09:37.757 "4f19a883-42d7-11ef-9ade-d5fc5159efa5" 00:09:37.757 ], 00:09:37.757 "product_name": "Malloc disk", 00:09:37.757 "block_size": 512, 00:09:37.757 "num_blocks": 65536, 00:09:37.757 "uuid": "4f19a883-42d7-11ef-9ade-d5fc5159efa5", 00:09:37.757 "assigned_rate_limits": { 00:09:37.757 "rw_ios_per_sec": 0, 00:09:37.757 "rw_mbytes_per_sec": 0, 00:09:37.757 "r_mbytes_per_sec": 0, 00:09:37.757 "w_mbytes_per_sec": 0 00:09:37.757 }, 00:09:37.757 "claimed": true, 00:09:37.757 "claim_type": "exclusive_write", 00:09:37.757 "zoned": false, 00:09:37.757 "supported_io_types": { 00:09:37.757 "read": true, 00:09:37.757 "write": true, 00:09:37.757 "unmap": true, 00:09:37.757 "flush": true, 00:09:37.757 "reset": true, 00:09:37.757 "nvme_admin": false, 00:09:37.757 "nvme_io": false, 00:09:37.758 "nvme_io_md": false, 00:09:37.758 "write_zeroes": true, 00:09:37.758 "zcopy": true, 00:09:37.758 "get_zone_info": false, 00:09:37.758 "zone_management": false, 00:09:37.758 "zone_append": false, 00:09:37.758 "compare": false, 00:09:37.758 "compare_and_write": false, 00:09:37.758 "abort": true, 00:09:37.758 "seek_hole": false, 00:09:37.758 "seek_data": false, 00:09:37.758 "copy": true, 
00:09:37.758 "nvme_iov_md": false 00:09:37.758 }, 00:09:37.758 "memory_domains": [ 00:09:37.758 { 00:09:37.758 "dma_device_id": "system", 00:09:37.758 "dma_device_type": 1 00:09:37.758 }, 00:09:37.758 { 00:09:37.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.758 "dma_device_type": 2 00:09:37.758 } 00:09:37.758 ], 00:09:37.758 "driver_specific": {} 00:09:37.758 }' 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:37.758 18:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:38.015 [2024-07-15 18:23:30.242575] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.015 [2024-07-15 18:23:30.242600] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.015 [2024-07-15 18:23:30.242651] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.015 [2024-07-15 18:23:30.242665] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.015 [2024-07-15 18:23:30.242669] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5f3d5234a00 name Existed_Raid, state offline 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51976 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51976 ']' 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51976 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51976 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:38.015 killing process with pid 51976 00:09:38.015 18:23:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51976' 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51976 00:09:38.015 [2024-07-15 18:23:30.272499] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.015 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51976 00:09:38.015 [2024-07-15 18:23:30.295165] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:38.273 00:09:38.273 real 0m24.166s 00:09:38.273 user 0m43.973s 00:09:38.273 sys 0m3.517s 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 ************************************ 00:09:38.273 END TEST raid_state_function_test 00:09:38.273 ************************************ 00:09:38.273 18:23:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:38.273 18:23:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:38.273 18:23:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:38.273 18:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.273 18:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 ************************************ 00:09:38.273 START TEST raid_state_function_test_sb 00:09:38.273 ************************************ 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52705 00:09:38.273 Process raid pid: 52705 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52705' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52705 /var/tmp/spdk-raid.sock 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52705 ']' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.273 18:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 [2024-07-15 18:23:30.580366] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:09:38.273 [2024-07-15 18:23:30.580650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:38.837 EAL: TSC is not safe to use in SMP mode 00:09:38.837 EAL: TSC is not invariant 00:09:38.837 [2024-07-15 18:23:31.160640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.095 [2024-07-15 18:23:31.270481] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:39.095 [2024-07-15 18:23:31.272630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.095 [2024-07-15 18:23:31.273457] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.095 [2024-07-15 18:23:31.273470] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.353 18:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.353 18:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:39.353 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:39.611 [2024-07-15 18:23:31.937592] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.611 [2024-07-15 18:23:31.937648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.611 [2024-07-15 18:23:31.937653] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.611 [2024-07-15 18:23:31.937662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.611 [2024-07-15 18:23:31.937666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.611 [2024-07-15 18:23:31.937673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:39.611 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.612 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.612 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.612 18:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.870 18:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.870 "name": "Existed_Raid", 00:09:39.870 "uuid": "572763db-42d7-11ef-9ade-d5fc5159efa5", 00:09:39.870 "strip_size_kb": 64, 00:09:39.870 "state": "configuring", 00:09:39.870 "raid_level": "raid0", 00:09:39.870 "superblock": true, 00:09:39.870 "num_base_bdevs": 3, 00:09:39.870 "num_base_bdevs_discovered": 0, 00:09:39.870 
"num_base_bdevs_operational": 3, 00:09:39.870 "base_bdevs_list": [ 00:09:39.870 { 00:09:39.870 "name": "BaseBdev1", 00:09:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.870 "is_configured": false, 00:09:39.870 "data_offset": 0, 00:09:39.870 "data_size": 0 00:09:39.870 }, 00:09:39.870 { 00:09:39.870 "name": "BaseBdev2", 00:09:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.870 "is_configured": false, 00:09:39.870 "data_offset": 0, 00:09:39.870 "data_size": 0 00:09:39.870 }, 00:09:39.870 { 00:09:39.870 "name": "BaseBdev3", 00:09:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.870 "is_configured": false, 00:09:39.870 "data_offset": 0, 00:09:39.870 "data_size": 0 00:09:39.870 } 00:09:39.870 ] 00:09:39.870 }' 00:09:39.870 18:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.870 18:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.436 18:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:40.436 [2024-07-15 18:23:32.757606] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.436 [2024-07-15 18:23:32.757638] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x192936834500 name Existed_Raid, state configuring 00:09:40.436 18:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:40.694 [2024-07-15 18:23:32.993626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.694 [2024-07-15 18:23:32.993693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.694 [2024-07-15 18:23:32.993699] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.695 [2024-07-15 18:23:32.993725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.695 [2024-07-15 18:23:32.993729] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.695 [2024-07-15 18:23:32.993736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.695 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.953 [2024-07-15 18:23:33.270953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.953 BaseBdev1 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:40.953 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:40.953 18:23:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:41.210 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.469 [ 00:09:41.469 { 00:09:41.469 "name": "BaseBdev1", 00:09:41.469 "aliases": [ 00:09:41.469 "57f2a651-42d7-11ef-9ade-d5fc5159efa5" 00:09:41.469 ], 00:09:41.469 "product_name": "Malloc disk", 00:09:41.469 "block_size": 512, 00:09:41.469 "num_blocks": 65536, 00:09:41.469 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:41.469 "assigned_rate_limits": { 00:09:41.469 "rw_ios_per_sec": 0, 00:09:41.469 "rw_mbytes_per_sec": 0, 00:09:41.469 "r_mbytes_per_sec": 0, 00:09:41.469 "w_mbytes_per_sec": 0 00:09:41.469 }, 00:09:41.469 "claimed": true, 00:09:41.469 "claim_type": "exclusive_write", 00:09:41.469 "zoned": false, 00:09:41.469 "supported_io_types": { 00:09:41.469 "read": true, 00:09:41.469 "write": true, 00:09:41.469 "unmap": true, 00:09:41.469 "flush": true, 00:09:41.469 "reset": true, 00:09:41.469 "nvme_admin": false, 00:09:41.469 "nvme_io": false, 00:09:41.469 "nvme_io_md": false, 00:09:41.469 "write_zeroes": true, 00:09:41.469 "zcopy": true, 00:09:41.469 "get_zone_info": false, 00:09:41.469 "zone_management": false, 00:09:41.469 "zone_append": false, 00:09:41.469 "compare": false, 00:09:41.469 "compare_and_write": false, 00:09:41.469 "abort": true, 00:09:41.469 "seek_hole": false, 00:09:41.469 "seek_data": false, 00:09:41.469 "copy": true, 00:09:41.469 "nvme_iov_md": false 00:09:41.469 }, 00:09:41.469 "memory_domains": [ 00:09:41.469 { 00:09:41.469 "dma_device_id": "system", 00:09:41.469 "dma_device_type": 1 00:09:41.469 }, 00:09:41.469 { 00:09:41.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.469 "dma_device_type": 2 00:09:41.469 } 00:09:41.469 ], 00:09:41.469 "driver_specific": {} 00:09:41.469 } 00:09:41.469 ] 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.469 18:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.469 18:23:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.728 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.728 "name": "Existed_Raid", 00:09:41.728 "uuid": "57c88749-42d7-11ef-9ade-d5fc5159efa5", 00:09:41.728 "strip_size_kb": 64, 00:09:41.728 "state": "configuring", 00:09:41.728 "raid_level": "raid0", 00:09:41.728 "superblock": true, 00:09:41.728 "num_base_bdevs": 3, 00:09:41.728 "num_base_bdevs_discovered": 1, 00:09:41.728 "num_base_bdevs_operational": 3, 00:09:41.728 "base_bdevs_list": [ 00:09:41.728 { 00:09:41.728 "name": "BaseBdev1", 00:09:41.728 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:41.728 "is_configured": true, 00:09:41.728 "data_offset": 2048, 00:09:41.728 "data_size": 63488 00:09:41.728 }, 00:09:41.728 { 00:09:41.728 "name": "BaseBdev2", 00:09:41.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.728 "is_configured": false, 00:09:41.728 "data_offset": 0, 00:09:41.728 "data_size": 0 00:09:41.728 }, 00:09:41.728 { 00:09:41.728 "name": "BaseBdev3", 00:09:41.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.728 "is_configured": false, 00:09:41.728 "data_offset": 0, 00:09:41.728 "data_size": 0 00:09:41.728 } 00:09:41.728 ] 00:09:41.728 }' 00:09:41.728 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.728 18:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.366 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:42.366 [2024-07-15 18:23:34.677705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.366 [2024-07-15 18:23:34.677741] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x192936834500 name Existed_Raid, state configuring 00:09:42.366 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:42.626 [2024-07-15 18:23:34.909741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.626 [2024-07-15 18:23:34.910558] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.626 [2024-07-15 18:23:34.910595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.626 [2024-07-15 18:23:34.910601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.626 [2024-07-15 18:23:34.910610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.626 18:23:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.626 18:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.884 18:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.884 "name": "Existed_Raid", 00:09:42.884 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:42.884 "strip_size_kb": 64, 00:09:42.884 "state": "configuring", 00:09:42.884 "raid_level": "raid0", 00:09:42.884 "superblock": true, 00:09:42.884 "num_base_bdevs": 3, 00:09:42.884 "num_base_bdevs_discovered": 1, 00:09:42.884 "num_base_bdevs_operational": 3, 00:09:42.884 "base_bdevs_list": [ 00:09:42.884 { 00:09:42.884 "name": "BaseBdev1", 00:09:42.884 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:42.884 "is_configured": true, 00:09:42.884 "data_offset": 2048, 00:09:42.884 "data_size": 63488 00:09:42.884 }, 00:09:42.884 { 00:09:42.884 "name": "BaseBdev2", 00:09:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.884 "is_configured": false, 00:09:42.884 "data_offset": 0, 00:09:42.884 "data_size": 0 00:09:42.884 }, 00:09:42.884 { 00:09:42.884 "name": "BaseBdev3", 00:09:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.884 "is_configured": false, 00:09:42.884 "data_offset": 0, 00:09:42.884 "data_size": 0 00:09:42.884 } 00:09:42.884 ] 00:09:42.884 }' 00:09:42.884 18:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.885 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.453 [2024-07-15 18:23:35.797924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.453 BaseBdev2 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:43.453 18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:43.454 
18:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:44.022 18:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.022 [ 00:09:44.022 { 00:09:44.022 "name": "BaseBdev2", 00:09:44.022 "aliases": [ 00:09:44.022 "59746939-42d7-11ef-9ade-d5fc5159efa5" 00:09:44.022 ], 00:09:44.022 "product_name": "Malloc disk", 00:09:44.022 "block_size": 512, 00:09:44.022 "num_blocks": 65536, 00:09:44.022 "uuid": "59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:44.022 "assigned_rate_limits": { 00:09:44.022 "rw_ios_per_sec": 0, 00:09:44.022 "rw_mbytes_per_sec": 0, 00:09:44.022 "r_mbytes_per_sec": 0, 00:09:44.022 "w_mbytes_per_sec": 0 00:09:44.022 }, 00:09:44.022 "claimed": true, 00:09:44.022 "claim_type": "exclusive_write", 00:09:44.022 "zoned": false, 00:09:44.022 "supported_io_types": { 00:09:44.022 "read": true, 00:09:44.022 "write": true, 00:09:44.022 "unmap": true, 00:09:44.022 "flush": true, 00:09:44.022 "reset": true, 00:09:44.022 "nvme_admin": false, 00:09:44.022 "nvme_io": false, 00:09:44.022 "nvme_io_md": false, 00:09:44.022 "write_zeroes": true, 00:09:44.022 "zcopy": true, 00:09:44.022 "get_zone_info": false, 00:09:44.022 "zone_management": false, 00:09:44.022 "zone_append": false, 00:09:44.022 "compare": false, 00:09:44.022 "compare_and_write": false, 00:09:44.022 "abort": true, 00:09:44.022 "seek_hole": false, 00:09:44.022 "seek_data": false, 00:09:44.022 "copy": true, 00:09:44.022 "nvme_iov_md": false 00:09:44.022 }, 00:09:44.022 "memory_domains": [ 00:09:44.022 { 00:09:44.022 "dma_device_id": "system", 00:09:44.022 "dma_device_type": 1 00:09:44.022 }, 00:09:44.022 { 00:09:44.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.022 "dma_device_type": 2 00:09:44.022 } 00:09:44.022 ], 00:09:44.022 "driver_specific": {} 00:09:44.022 } 00:09:44.022 ] 00:09:44.022 18:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
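Note: the trace above shows the test's recurring create-and-verify pattern for base bdevs. Each member is a 32 MiB malloc disk with a 512-byte block size (32 MiB / 512 B = 65536 blocks, matching "num_blocks": 65536 in the dump), created over the test's private RPC socket, flushed through examine, then polled until it exists. A minimal manual replay of that sequence, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock as in this run, would be:
  # create a 32 MiB malloc bdev with 512-byte blocks
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  # block until auto-examine has finished inspecting newly created bdevs
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  # dump the bdev's properties, waiting up to 2000 ms for it to appear
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000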
00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.023 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.282 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.282 "name": "Existed_Raid", 00:09:44.282 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:44.282 "strip_size_kb": 64, 00:09:44.282 "state": "configuring", 00:09:44.282 "raid_level": "raid0", 00:09:44.282 "superblock": true, 00:09:44.282 "num_base_bdevs": 3, 00:09:44.282 "num_base_bdevs_discovered": 2, 00:09:44.282 "num_base_bdevs_operational": 3, 00:09:44.282 "base_bdevs_list": [ 00:09:44.282 { 00:09:44.282 "name": "BaseBdev1", 00:09:44.282 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:44.282 "is_configured": true, 00:09:44.282 "data_offset": 2048, 00:09:44.282 "data_size": 63488 00:09:44.282 }, 00:09:44.282 { 00:09:44.282 "name": "BaseBdev2", 00:09:44.282 "uuid": "59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:44.282 "is_configured": true, 00:09:44.282 "data_offset": 2048, 00:09:44.282 "data_size": 63488 00:09:44.282 }, 00:09:44.282 { 00:09:44.282 "name": "BaseBdev3", 00:09:44.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.282 "is_configured": false, 00:09:44.282 "data_offset": 0, 00:09:44.282 "data_size": 0 00:09:44.282 } 00:09:44.282 ] 00:09:44.282 }' 00:09:44.282 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.282 18:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.540 18:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:44.799 [2024-07-15 18:23:37.118005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.799 [2024-07-15 18:23:37.118088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x192936834a00 00:09:44.799 [2024-07-15 18:23:37.118094] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.799 [2024-07-15 18:23:37.118115] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x192936897e20 00:09:44.799 [2024-07-15 18:23:37.118174] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x192936834a00 00:09:44.799 [2024-07-15 18:23:37.118178] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x192936834a00 00:09:44.799 [2024-07-15 18:23:37.118200] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.799 BaseBdev3 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:09:44.799 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.058 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.316 [ 00:09:45.316 { 00:09:45.316 "name": "BaseBdev3", 00:09:45.316 "aliases": [ 00:09:45.316 "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5" 00:09:45.316 ], 00:09:45.316 "product_name": "Malloc disk", 00:09:45.316 "block_size": 512, 00:09:45.316 "num_blocks": 65536, 00:09:45.316 "uuid": "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5", 00:09:45.316 "assigned_rate_limits": { 00:09:45.316 "rw_ios_per_sec": 0, 00:09:45.316 "rw_mbytes_per_sec": 0, 00:09:45.316 "r_mbytes_per_sec": 0, 00:09:45.316 "w_mbytes_per_sec": 0 00:09:45.316 }, 00:09:45.316 "claimed": true, 00:09:45.316 "claim_type": "exclusive_write", 00:09:45.316 "zoned": false, 00:09:45.316 "supported_io_types": { 00:09:45.316 "read": true, 00:09:45.316 "write": true, 00:09:45.316 "unmap": true, 00:09:45.316 "flush": true, 00:09:45.316 "reset": true, 00:09:45.316 "nvme_admin": false, 00:09:45.316 "nvme_io": false, 00:09:45.316 "nvme_io_md": false, 00:09:45.316 "write_zeroes": true, 00:09:45.316 "zcopy": true, 00:09:45.316 "get_zone_info": false, 00:09:45.316 "zone_management": false, 00:09:45.316 "zone_append": false, 00:09:45.316 "compare": false, 00:09:45.316 "compare_and_write": false, 00:09:45.316 "abort": true, 00:09:45.316 "seek_hole": false, 00:09:45.316 "seek_data": false, 00:09:45.316 "copy": true, 00:09:45.316 "nvme_iov_md": false 00:09:45.316 }, 00:09:45.316 "memory_domains": [ 00:09:45.316 { 00:09:45.316 "dma_device_id": "system", 00:09:45.316 "dma_device_type": 1 00:09:45.316 }, 00:09:45.316 { 00:09:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.316 "dma_device_type": 2 00:09:45.316 } 00:09:45.316 ], 00:09:45.316 "driver_specific": {} 00:09:45.316 } 00:09:45.316 ] 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:45.636 "name": "Existed_Raid", 00:09:45.636 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:45.636 "strip_size_kb": 64, 00:09:45.636 "state": "online", 00:09:45.636 "raid_level": "raid0", 00:09:45.636 "superblock": true, 00:09:45.636 "num_base_bdevs": 3, 00:09:45.636 "num_base_bdevs_discovered": 3, 00:09:45.636 "num_base_bdevs_operational": 3, 00:09:45.636 "base_bdevs_list": [ 00:09:45.636 { 00:09:45.636 "name": "BaseBdev1", 00:09:45.636 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:45.636 "is_configured": true, 00:09:45.636 "data_offset": 2048, 00:09:45.636 "data_size": 63488 00:09:45.636 }, 00:09:45.636 { 00:09:45.636 "name": "BaseBdev2", 00:09:45.636 "uuid": "59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:45.636 "is_configured": true, 00:09:45.636 "data_offset": 2048, 00:09:45.636 "data_size": 63488 00:09:45.636 }, 00:09:45.636 { 00:09:45.636 "name": "BaseBdev3", 00:09:45.636 "uuid": "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5", 00:09:45.636 "is_configured": true, 00:09:45.636 "data_offset": 2048, 00:09:45.636 "data_size": 63488 00:09:45.636 } 00:09:45.636 ] 00:09:45.636 }' 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.636 18:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:45.895 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:46.154 [2024-07-15 18:23:38.513995] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.154 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:46.154 "name": "Existed_Raid", 00:09:46.154 "aliases": [ 00:09:46.154 "58ece758-42d7-11ef-9ade-d5fc5159efa5" 00:09:46.154 ], 00:09:46.154 "product_name": "Raid Volume", 00:09:46.154 "block_size": 512, 00:09:46.154 "num_blocks": 190464, 00:09:46.154 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.154 "assigned_rate_limits": { 00:09:46.154 "rw_ios_per_sec": 0, 00:09:46.154 "rw_mbytes_per_sec": 0, 00:09:46.154 "r_mbytes_per_sec": 0, 00:09:46.154 "w_mbytes_per_sec": 0 00:09:46.154 }, 00:09:46.154 "claimed": false, 00:09:46.154 "zoned": false, 
00:09:46.154 "supported_io_types": { 00:09:46.154 "read": true, 00:09:46.154 "write": true, 00:09:46.154 "unmap": true, 00:09:46.154 "flush": true, 00:09:46.154 "reset": true, 00:09:46.154 "nvme_admin": false, 00:09:46.154 "nvme_io": false, 00:09:46.154 "nvme_io_md": false, 00:09:46.154 "write_zeroes": true, 00:09:46.154 "zcopy": false, 00:09:46.154 "get_zone_info": false, 00:09:46.154 "zone_management": false, 00:09:46.154 "zone_append": false, 00:09:46.154 "compare": false, 00:09:46.154 "compare_and_write": false, 00:09:46.154 "abort": false, 00:09:46.154 "seek_hole": false, 00:09:46.154 "seek_data": false, 00:09:46.154 "copy": false, 00:09:46.154 "nvme_iov_md": false 00:09:46.154 }, 00:09:46.154 "memory_domains": [ 00:09:46.154 { 00:09:46.154 "dma_device_id": "system", 00:09:46.154 "dma_device_type": 1 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.154 "dma_device_type": 2 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "dma_device_id": "system", 00:09:46.154 "dma_device_type": 1 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.154 "dma_device_type": 2 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "dma_device_id": "system", 00:09:46.154 "dma_device_type": 1 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.154 "dma_device_type": 2 00:09:46.154 } 00:09:46.154 ], 00:09:46.154 "driver_specific": { 00:09:46.154 "raid": { 00:09:46.154 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.154 "strip_size_kb": 64, 00:09:46.154 "state": "online", 00:09:46.154 "raid_level": "raid0", 00:09:46.154 "superblock": true, 00:09:46.154 "num_base_bdevs": 3, 00:09:46.154 "num_base_bdevs_discovered": 3, 00:09:46.154 "num_base_bdevs_operational": 3, 00:09:46.154 "base_bdevs_list": [ 00:09:46.154 { 00:09:46.154 "name": "BaseBdev1", 00:09:46.154 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.154 "is_configured": true, 00:09:46.154 "data_offset": 2048, 00:09:46.154 "data_size": 63488 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "name": "BaseBdev2", 00:09:46.154 "uuid": "59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.154 "is_configured": true, 00:09:46.154 "data_offset": 2048, 00:09:46.154 "data_size": 63488 00:09:46.154 }, 00:09:46.154 { 00:09:46.154 "name": "BaseBdev3", 00:09:46.154 "uuid": "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.154 "is_configured": true, 00:09:46.154 "data_offset": 2048, 00:09:46.154 "data_size": 63488 00:09:46.154 } 00:09:46.154 ] 00:09:46.154 } 00:09:46.154 } 00:09:46.154 }' 00:09:46.154 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.413 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:46.413 BaseBdev2 00:09:46.413 BaseBdev3' 00:09:46.413 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.413 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:46.413 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:46.672 "name": "BaseBdev1", 00:09:46.672 "aliases": [ 00:09:46.672 "57f2a651-42d7-11ef-9ade-d5fc5159efa5" 00:09:46.672 
], 00:09:46.672 "product_name": "Malloc disk", 00:09:46.672 "block_size": 512, 00:09:46.672 "num_blocks": 65536, 00:09:46.672 "uuid": "57f2a651-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.672 "assigned_rate_limits": { 00:09:46.672 "rw_ios_per_sec": 0, 00:09:46.672 "rw_mbytes_per_sec": 0, 00:09:46.672 "r_mbytes_per_sec": 0, 00:09:46.672 "w_mbytes_per_sec": 0 00:09:46.672 }, 00:09:46.672 "claimed": true, 00:09:46.672 "claim_type": "exclusive_write", 00:09:46.672 "zoned": false, 00:09:46.672 "supported_io_types": { 00:09:46.672 "read": true, 00:09:46.672 "write": true, 00:09:46.672 "unmap": true, 00:09:46.672 "flush": true, 00:09:46.672 "reset": true, 00:09:46.672 "nvme_admin": false, 00:09:46.672 "nvme_io": false, 00:09:46.672 "nvme_io_md": false, 00:09:46.672 "write_zeroes": true, 00:09:46.672 "zcopy": true, 00:09:46.672 "get_zone_info": false, 00:09:46.672 "zone_management": false, 00:09:46.672 "zone_append": false, 00:09:46.672 "compare": false, 00:09:46.672 "compare_and_write": false, 00:09:46.672 "abort": true, 00:09:46.672 "seek_hole": false, 00:09:46.672 "seek_data": false, 00:09:46.672 "copy": true, 00:09:46.672 "nvme_iov_md": false 00:09:46.672 }, 00:09:46.672 "memory_domains": [ 00:09:46.672 { 00:09:46.672 "dma_device_id": "system", 00:09:46.672 "dma_device_type": 1 00:09:46.672 }, 00:09:46.672 { 00:09:46.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.672 "dma_device_type": 2 00:09:46.672 } 00:09:46.672 ], 00:09:46.672 "driver_specific": {} 00:09:46.672 }' 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:46.672 18:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:46.931 "name": "BaseBdev2", 00:09:46.931 "aliases": [ 00:09:46.931 "59746939-42d7-11ef-9ade-d5fc5159efa5" 00:09:46.931 ], 00:09:46.931 "product_name": "Malloc disk", 00:09:46.931 "block_size": 512, 00:09:46.931 "num_blocks": 65536, 00:09:46.931 "uuid": 
"59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:46.931 "assigned_rate_limits": { 00:09:46.931 "rw_ios_per_sec": 0, 00:09:46.931 "rw_mbytes_per_sec": 0, 00:09:46.931 "r_mbytes_per_sec": 0, 00:09:46.931 "w_mbytes_per_sec": 0 00:09:46.931 }, 00:09:46.931 "claimed": true, 00:09:46.931 "claim_type": "exclusive_write", 00:09:46.931 "zoned": false, 00:09:46.931 "supported_io_types": { 00:09:46.931 "read": true, 00:09:46.931 "write": true, 00:09:46.931 "unmap": true, 00:09:46.931 "flush": true, 00:09:46.931 "reset": true, 00:09:46.931 "nvme_admin": false, 00:09:46.931 "nvme_io": false, 00:09:46.931 "nvme_io_md": false, 00:09:46.931 "write_zeroes": true, 00:09:46.931 "zcopy": true, 00:09:46.931 "get_zone_info": false, 00:09:46.931 "zone_management": false, 00:09:46.931 "zone_append": false, 00:09:46.931 "compare": false, 00:09:46.931 "compare_and_write": false, 00:09:46.931 "abort": true, 00:09:46.931 "seek_hole": false, 00:09:46.931 "seek_data": false, 00:09:46.931 "copy": true, 00:09:46.931 "nvme_iov_md": false 00:09:46.931 }, 00:09:46.931 "memory_domains": [ 00:09:46.931 { 00:09:46.931 "dma_device_id": "system", 00:09:46.931 "dma_device_type": 1 00:09:46.931 }, 00:09:46.931 { 00:09:46.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.931 "dma_device_type": 2 00:09:46.931 } 00:09:46.931 ], 00:09:46.931 "driver_specific": {} 00:09:46.931 }' 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:46.931 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.190 "name": "BaseBdev3", 00:09:47.190 "aliases": [ 00:09:47.190 "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5" 00:09:47.190 ], 00:09:47.190 "product_name": "Malloc disk", 00:09:47.190 "block_size": 512, 00:09:47.190 "num_blocks": 65536, 00:09:47.190 "uuid": "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5", 00:09:47.190 "assigned_rate_limits": { 00:09:47.190 "rw_ios_per_sec": 0, 00:09:47.190 "rw_mbytes_per_sec": 0, 
00:09:47.190 "r_mbytes_per_sec": 0, 00:09:47.190 "w_mbytes_per_sec": 0 00:09:47.190 }, 00:09:47.190 "claimed": true, 00:09:47.190 "claim_type": "exclusive_write", 00:09:47.190 "zoned": false, 00:09:47.190 "supported_io_types": { 00:09:47.190 "read": true, 00:09:47.190 "write": true, 00:09:47.190 "unmap": true, 00:09:47.190 "flush": true, 00:09:47.190 "reset": true, 00:09:47.190 "nvme_admin": false, 00:09:47.190 "nvme_io": false, 00:09:47.190 "nvme_io_md": false, 00:09:47.190 "write_zeroes": true, 00:09:47.190 "zcopy": true, 00:09:47.190 "get_zone_info": false, 00:09:47.190 "zone_management": false, 00:09:47.190 "zone_append": false, 00:09:47.190 "compare": false, 00:09:47.190 "compare_and_write": false, 00:09:47.190 "abort": true, 00:09:47.190 "seek_hole": false, 00:09:47.190 "seek_data": false, 00:09:47.190 "copy": true, 00:09:47.190 "nvme_iov_md": false 00:09:47.190 }, 00:09:47.190 "memory_domains": [ 00:09:47.190 { 00:09:47.190 "dma_device_id": "system", 00:09:47.190 "dma_device_type": 1 00:09:47.190 }, 00:09:47.190 { 00:09:47.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.190 "dma_device_type": 2 00:09:47.190 } 00:09:47.190 ], 00:09:47.190 "driver_specific": {} 00:09:47.190 }' 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:47.190 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:47.449 [2024-07-15 18:23:39.774038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.449 [2024-07-15 18:23:39.774074] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.449 [2024-07-15 18:23:39.774089] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.449 18:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.708 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.708 "name": "Existed_Raid", 00:09:47.708 "uuid": "58ece758-42d7-11ef-9ade-d5fc5159efa5", 00:09:47.708 "strip_size_kb": 64, 00:09:47.708 "state": "offline", 00:09:47.708 "raid_level": "raid0", 00:09:47.708 "superblock": true, 00:09:47.708 "num_base_bdevs": 3, 00:09:47.708 "num_base_bdevs_discovered": 2, 00:09:47.708 "num_base_bdevs_operational": 2, 00:09:47.708 "base_bdevs_list": [ 00:09:47.708 { 00:09:47.708 "name": null, 00:09:47.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.708 "is_configured": false, 00:09:47.708 "data_offset": 2048, 00:09:47.708 "data_size": 63488 00:09:47.708 }, 00:09:47.708 { 00:09:47.708 "name": "BaseBdev2", 00:09:47.708 "uuid": "59746939-42d7-11ef-9ade-d5fc5159efa5", 00:09:47.708 "is_configured": true, 00:09:47.708 "data_offset": 2048, 00:09:47.708 "data_size": 63488 00:09:47.708 }, 00:09:47.708 { 00:09:47.708 "name": "BaseBdev3", 00:09:47.708 "uuid": "5a3dd76e-42d7-11ef-9ade-d5fc5159efa5", 00:09:47.708 "is_configured": true, 00:09:47.708 "data_offset": 2048, 00:09:47.708 "data_size": 63488 00:09:47.708 } 00:09:47.708 ] 00:09:47.708 }' 00:09:47.708 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.708 18:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.276 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:48.276 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:48.276 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:48.276 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:48.534 [2024-07-15 18:23:40.883903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:48.534 18:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:49.102 [2024-07-15 18:23:41.412020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.102 [2024-07-15 18:23:41.412054] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x192936834a00 name Existed_Raid, state offline 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.102 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:49.360 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.618 BaseBdev2 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:49.618 18:23:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:49.618 18:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:49.877 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.136 [ 00:09:50.136 { 00:09:50.136 "name": "BaseBdev2", 00:09:50.136 "aliases": [ 00:09:50.136 "5d13e6db-42d7-11ef-9ade-d5fc5159efa5" 00:09:50.136 ], 00:09:50.136 "product_name": "Malloc disk", 00:09:50.136 "block_size": 512, 00:09:50.136 "num_blocks": 65536, 00:09:50.136 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:50.136 "assigned_rate_limits": { 00:09:50.136 "rw_ios_per_sec": 0, 00:09:50.136 "rw_mbytes_per_sec": 0, 00:09:50.136 "r_mbytes_per_sec": 0, 00:09:50.136 "w_mbytes_per_sec": 0 00:09:50.136 }, 00:09:50.136 "claimed": false, 00:09:50.136 "zoned": false, 00:09:50.136 "supported_io_types": { 00:09:50.136 "read": true, 00:09:50.136 "write": true, 00:09:50.136 "unmap": true, 00:09:50.136 "flush": true, 00:09:50.136 "reset": true, 00:09:50.136 "nvme_admin": false, 00:09:50.136 "nvme_io": false, 00:09:50.136 "nvme_io_md": false, 00:09:50.136 "write_zeroes": true, 00:09:50.136 "zcopy": true, 00:09:50.136 "get_zone_info": false, 00:09:50.136 "zone_management": false, 00:09:50.136 "zone_append": false, 00:09:50.136 "compare": false, 00:09:50.136 "compare_and_write": false, 00:09:50.136 "abort": true, 00:09:50.136 "seek_hole": false, 00:09:50.136 "seek_data": false, 00:09:50.136 "copy": true, 00:09:50.136 "nvme_iov_md": false 00:09:50.136 }, 00:09:50.136 "memory_domains": [ 00:09:50.136 { 00:09:50.136 "dma_device_id": "system", 00:09:50.136 "dma_device_type": 1 00:09:50.136 }, 00:09:50.136 { 00:09:50.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.136 "dma_device_type": 2 00:09:50.136 } 00:09:50.136 ], 00:09:50.136 "driver_specific": {} 00:09:50.136 } 00:09:50.136 ] 00:09:50.136 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:50.136 18:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:50.136 18:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:50.136 18:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.394 BaseBdev3 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:50.394 18:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:50.653 18:23:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.911 [ 00:09:50.911 { 00:09:50.911 "name": "BaseBdev3", 00:09:50.911 "aliases": [ 00:09:50.911 "5d8ae975-42d7-11ef-9ade-d5fc5159efa5" 00:09:50.911 ], 00:09:50.911 "product_name": "Malloc disk", 00:09:50.911 "block_size": 512, 00:09:50.911 "num_blocks": 65536, 00:09:50.911 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:50.911 "assigned_rate_limits": { 00:09:50.911 "rw_ios_per_sec": 0, 00:09:50.911 "rw_mbytes_per_sec": 0, 00:09:50.911 "r_mbytes_per_sec": 0, 00:09:50.911 "w_mbytes_per_sec": 0 00:09:50.911 }, 00:09:50.911 "claimed": false, 00:09:50.911 "zoned": false, 00:09:50.911 "supported_io_types": { 00:09:50.911 "read": true, 00:09:50.911 "write": true, 00:09:50.911 "unmap": true, 00:09:50.911 "flush": true, 00:09:50.911 "reset": true, 00:09:50.911 "nvme_admin": false, 00:09:50.911 "nvme_io": false, 00:09:50.911 "nvme_io_md": false, 00:09:50.911 "write_zeroes": true, 00:09:50.911 "zcopy": true, 00:09:50.911 "get_zone_info": false, 00:09:50.911 "zone_management": false, 00:09:50.911 "zone_append": false, 00:09:50.911 "compare": false, 00:09:50.911 "compare_and_write": false, 00:09:50.911 "abort": true, 00:09:50.911 "seek_hole": false, 00:09:50.911 "seek_data": false, 00:09:50.911 "copy": true, 00:09:50.911 "nvme_iov_md": false 00:09:50.911 }, 00:09:50.911 "memory_domains": [ 00:09:50.911 { 00:09:50.911 "dma_device_id": "system", 00:09:50.911 "dma_device_type": 1 00:09:50.911 }, 00:09:50.911 { 00:09:50.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.911 "dma_device_type": 2 00:09:50.911 } 00:09:50.911 ], 00:09:50.911 "driver_specific": {} 00:09:50.911 } 00:09:50.911 ] 00:09:50.911 18:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:50.911 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:50.911 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:50.911 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:51.169 [2024-07-15 18:23:43.484096] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.169 [2024-07-15 18:23:43.484150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.169 [2024-07-15 18:23:43.484159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.169 [2024-07-15 18:23:43.484721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.169 18:23:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.169 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.427 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.427 "name": "Existed_Raid", 00:09:51.427 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:51.427 "strip_size_kb": 64, 00:09:51.427 "state": "configuring", 00:09:51.427 "raid_level": "raid0", 00:09:51.427 "superblock": true, 00:09:51.427 "num_base_bdevs": 3, 00:09:51.427 "num_base_bdevs_discovered": 2, 00:09:51.427 "num_base_bdevs_operational": 3, 00:09:51.427 "base_bdevs_list": [ 00:09:51.427 { 00:09:51.427 "name": "BaseBdev1", 00:09:51.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.427 "is_configured": false, 00:09:51.427 "data_offset": 0, 00:09:51.427 "data_size": 0 00:09:51.427 }, 00:09:51.427 { 00:09:51.427 "name": "BaseBdev2", 00:09:51.427 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:51.427 "is_configured": true, 00:09:51.427 "data_offset": 2048, 00:09:51.427 "data_size": 63488 00:09:51.427 }, 00:09:51.427 { 00:09:51.427 "name": "BaseBdev3", 00:09:51.427 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:51.427 "is_configured": true, 00:09:51.427 "data_offset": 2048, 00:09:51.427 "data_size": 63488 00:09:51.427 } 00:09:51.427 ] 00:09:51.427 }' 00:09:51.427 18:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.427 18:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.994 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:51.994 [2024-07-15 18:23:44.376039] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.253 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.512 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.512 "name": "Existed_Raid", 00:09:52.512 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:52.512 "strip_size_kb": 64, 00:09:52.512 "state": "configuring", 00:09:52.512 "raid_level": "raid0", 00:09:52.512 "superblock": true, 00:09:52.512 "num_base_bdevs": 3, 00:09:52.512 "num_base_bdevs_discovered": 1, 00:09:52.512 "num_base_bdevs_operational": 3, 00:09:52.512 "base_bdevs_list": [ 00:09:52.512 { 00:09:52.512 "name": "BaseBdev1", 00:09:52.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.512 "is_configured": false, 00:09:52.512 "data_offset": 0, 00:09:52.512 "data_size": 0 00:09:52.512 }, 00:09:52.512 { 00:09:52.512 "name": null, 00:09:52.512 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:52.512 "is_configured": false, 00:09:52.512 "data_offset": 2048, 00:09:52.512 "data_size": 63488 00:09:52.512 }, 00:09:52.512 { 00:09:52.512 "name": "BaseBdev3", 00:09:52.512 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:52.512 "is_configured": true, 00:09:52.512 "data_offset": 2048, 00:09:52.512 "data_size": 63488 00:09:52.512 } 00:09:52.512 ] 00:09:52.512 }' 00:09:52.512 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.512 18:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.812 18:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.812 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:52.812 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.070 [2024-07-15 18:23:45.392123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.070 BaseBdev1 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:53.070 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:53.328 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.587 [ 00:09:53.587 { 00:09:53.587 "name": "BaseBdev1", 00:09:53.587 "aliases": [ 00:09:53.587 "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5" 00:09:53.587 ], 00:09:53.587 "product_name": "Malloc disk", 00:09:53.587 "block_size": 512, 00:09:53.587 "num_blocks": 65536, 00:09:53.587 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:53.587 "assigned_rate_limits": { 00:09:53.587 "rw_ios_per_sec": 0, 00:09:53.587 "rw_mbytes_per_sec": 0, 00:09:53.587 "r_mbytes_per_sec": 0, 00:09:53.587 "w_mbytes_per_sec": 0 00:09:53.587 }, 00:09:53.587 "claimed": true, 00:09:53.587 "claim_type": "exclusive_write", 00:09:53.587 "zoned": false, 00:09:53.587 "supported_io_types": { 00:09:53.587 "read": true, 00:09:53.587 "write": true, 00:09:53.587 "unmap": true, 00:09:53.587 "flush": true, 00:09:53.587 "reset": true, 00:09:53.587 "nvme_admin": false, 00:09:53.587 "nvme_io": false, 00:09:53.587 "nvme_io_md": false, 00:09:53.587 "write_zeroes": true, 00:09:53.587 "zcopy": true, 00:09:53.587 "get_zone_info": false, 00:09:53.587 "zone_management": false, 00:09:53.587 "zone_append": false, 00:09:53.587 "compare": false, 00:09:53.587 "compare_and_write": false, 00:09:53.587 "abort": true, 00:09:53.587 "seek_hole": false, 00:09:53.587 "seek_data": false, 00:09:53.587 "copy": true, 00:09:53.587 "nvme_iov_md": false 00:09:53.587 }, 00:09:53.587 "memory_domains": [ 00:09:53.587 { 00:09:53.587 "dma_device_id": "system", 00:09:53.587 "dma_device_type": 1 00:09:53.587 }, 00:09:53.587 { 00:09:53.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.587 "dma_device_type": 2 00:09:53.588 } 00:09:53.588 ], 00:09:53.588 "driver_specific": {} 00:09:53.588 } 00:09:53.588 ] 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.588 18:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:09:53.847 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.847 "name": "Existed_Raid", 00:09:53.847 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:53.847 "strip_size_kb": 64, 00:09:53.847 "state": "configuring", 00:09:53.847 "raid_level": "raid0", 00:09:53.847 "superblock": true, 00:09:53.847 "num_base_bdevs": 3, 00:09:53.847 "num_base_bdevs_discovered": 2, 00:09:53.847 "num_base_bdevs_operational": 3, 00:09:53.847 "base_bdevs_list": [ 00:09:53.847 { 00:09:53.847 "name": "BaseBdev1", 00:09:53.847 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:53.847 "is_configured": true, 00:09:53.847 "data_offset": 2048, 00:09:53.847 "data_size": 63488 00:09:53.847 }, 00:09:53.847 { 00:09:53.847 "name": null, 00:09:53.847 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:53.847 "is_configured": false, 00:09:53.847 "data_offset": 2048, 00:09:53.847 "data_size": 63488 00:09:53.847 }, 00:09:53.847 { 00:09:53.847 "name": "BaseBdev3", 00:09:53.847 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:53.847 "is_configured": true, 00:09:53.847 "data_offset": 2048, 00:09:53.847 "data_size": 63488 00:09:53.847 } 00:09:53.847 ] 00:09:53.847 }' 00:09:53.847 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.847 18:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.414 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.414 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.414 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:54.414 18:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:54.673 [2024-07-15 18:23:47.015926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:09:54.673 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.932 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.932 "name": "Existed_Raid", 00:09:54.932 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:54.932 "strip_size_kb": 64, 00:09:54.932 "state": "configuring", 00:09:54.932 "raid_level": "raid0", 00:09:54.932 "superblock": true, 00:09:54.932 "num_base_bdevs": 3, 00:09:54.932 "num_base_bdevs_discovered": 1, 00:09:54.932 "num_base_bdevs_operational": 3, 00:09:54.932 "base_bdevs_list": [ 00:09:54.932 { 00:09:54.932 "name": "BaseBdev1", 00:09:54.932 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:54.932 "is_configured": true, 00:09:54.932 "data_offset": 2048, 00:09:54.932 "data_size": 63488 00:09:54.932 }, 00:09:54.932 { 00:09:54.932 "name": null, 00:09:54.932 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:54.932 "is_configured": false, 00:09:54.932 "data_offset": 2048, 00:09:54.932 "data_size": 63488 00:09:54.932 }, 00:09:54.932 { 00:09:54.932 "name": null, 00:09:54.932 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:54.932 "is_configured": false, 00:09:54.932 "data_offset": 2048, 00:09:54.932 "data_size": 63488 00:09:54.932 } 00:09:54.932 ] 00:09:54.932 }' 00:09:54.932 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.932 18:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.499 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.499 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.499 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:55.499 18:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.757 [2024-07-15 18:23:48.127892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.016 18:23:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.016 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.274 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:56.274 "name": "Existed_Raid", 00:09:56.274 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:56.274 "strip_size_kb": 64, 00:09:56.274 "state": "configuring", 00:09:56.274 "raid_level": "raid0", 00:09:56.274 "superblock": true, 00:09:56.274 "num_base_bdevs": 3, 00:09:56.274 "num_base_bdevs_discovered": 2, 00:09:56.274 "num_base_bdevs_operational": 3, 00:09:56.274 "base_bdevs_list": [ 00:09:56.274 { 00:09:56.274 "name": "BaseBdev1", 00:09:56.274 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:56.274 "is_configured": true, 00:09:56.274 "data_offset": 2048, 00:09:56.274 "data_size": 63488 00:09:56.274 }, 00:09:56.274 { 00:09:56.274 "name": null, 00:09:56.274 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:56.274 "is_configured": false, 00:09:56.274 "data_offset": 2048, 00:09:56.274 "data_size": 63488 00:09:56.274 }, 00:09:56.274 { 00:09:56.274 "name": "BaseBdev3", 00:09:56.274 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:56.274 "is_configured": true, 00:09:56.274 "data_offset": 2048, 00:09:56.274 "data_size": 63488 00:09:56.274 } 00:09:56.274 ] 00:09:56.274 }' 00:09:56.274 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:56.274 18:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.597 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.597 18:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.855 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:56.855 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:57.114 [2024-07-15 18:23:49.307857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.114 
18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.114 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.374 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.374 "name": "Existed_Raid", 00:09:57.374 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:57.374 "strip_size_kb": 64, 00:09:57.374 "state": "configuring", 00:09:57.374 "raid_level": "raid0", 00:09:57.374 "superblock": true, 00:09:57.374 "num_base_bdevs": 3, 00:09:57.374 "num_base_bdevs_discovered": 1, 00:09:57.374 "num_base_bdevs_operational": 3, 00:09:57.374 "base_bdevs_list": [ 00:09:57.374 { 00:09:57.374 "name": null, 00:09:57.374 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:57.374 "is_configured": false, 00:09:57.374 "data_offset": 2048, 00:09:57.374 "data_size": 63488 00:09:57.374 }, 00:09:57.374 { 00:09:57.374 "name": null, 00:09:57.374 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:57.374 "is_configured": false, 00:09:57.374 "data_offset": 2048, 00:09:57.374 "data_size": 63488 00:09:57.374 }, 00:09:57.374 { 00:09:57.374 "name": "BaseBdev3", 00:09:57.374 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:57.374 "is_configured": true, 00:09:57.374 "data_offset": 2048, 00:09:57.374 "data_size": 63488 00:09:57.374 } 00:09:57.374 ] 00:09:57.374 }' 00:09:57.374 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.374 18:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.633 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.633 18:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.891 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:57.891 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:58.150 [2024-07-15 18:23:50.411985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.150 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.409 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.409 "name": "Existed_Raid", 00:09:58.409 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:09:58.409 "strip_size_kb": 64, 00:09:58.409 "state": "configuring", 00:09:58.409 "raid_level": "raid0", 00:09:58.409 "superblock": true, 00:09:58.409 "num_base_bdevs": 3, 00:09:58.409 "num_base_bdevs_discovered": 2, 00:09:58.409 "num_base_bdevs_operational": 3, 00:09:58.409 "base_bdevs_list": [ 00:09:58.409 { 00:09:58.409 "name": null, 00:09:58.409 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:58.409 "is_configured": false, 00:09:58.409 "data_offset": 2048, 00:09:58.409 "data_size": 63488 00:09:58.409 }, 00:09:58.409 { 00:09:58.409 "name": "BaseBdev2", 00:09:58.409 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:09:58.409 "is_configured": true, 00:09:58.409 "data_offset": 2048, 00:09:58.409 "data_size": 63488 00:09:58.409 }, 00:09:58.409 { 00:09:58.409 "name": "BaseBdev3", 00:09:58.409 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:09:58.409 "is_configured": true, 00:09:58.409 "data_offset": 2048, 00:09:58.409 "data_size": 63488 00:09:58.409 } 00:09:58.409 ] 00:09:58.409 }' 00:09:58.409 18:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.409 18:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.668 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.668 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.925 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:58.925 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.925 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:59.183 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5 00:09:59.440 [2024-07-15 18:23:51.780093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:59.440 [2024-07-15 18:23:51.780148] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x192936834a00 00:09:59.440 [2024-07-15 18:23:51.780154] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.440 [2024-07-15 18:23:51.780174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x192936897e20 00:09:59.440 [2024-07-15 18:23:51.780222] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x192936834a00 00:09:59.440 [2024-07-15 18:23:51.780226] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x192936834a00 00:09:59.440 [2024-07-15 18:23:51.780247] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.440 NewBaseBdev 00:09:59.440 18:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:59.440 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:59.440 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:59.440 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:59.440 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:59.441 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:59.441 18:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:59.699 18:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:59.957 [ 00:09:59.957 { 00:09:59.957 "name": "NewBaseBdev", 00:09:59.957 "aliases": [ 00:09:59.957 "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5" 00:09:59.957 ], 00:09:59.957 "product_name": "Malloc disk", 00:09:59.957 "block_size": 512, 00:09:59.957 "num_blocks": 65536, 00:09:59.957 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:09:59.957 "assigned_rate_limits": { 00:09:59.957 "rw_ios_per_sec": 0, 00:09:59.957 "rw_mbytes_per_sec": 0, 00:09:59.957 "r_mbytes_per_sec": 0, 00:09:59.957 "w_mbytes_per_sec": 0 00:09:59.957 }, 00:09:59.957 "claimed": true, 00:09:59.957 "claim_type": "exclusive_write", 00:09:59.957 "zoned": false, 00:09:59.957 "supported_io_types": { 00:09:59.957 "read": true, 00:09:59.957 "write": true, 00:09:59.957 "unmap": true, 00:09:59.957 "flush": true, 00:09:59.957 "reset": true, 00:09:59.957 "nvme_admin": false, 00:09:59.957 "nvme_io": false, 00:09:59.957 "nvme_io_md": false, 00:09:59.957 "write_zeroes": true, 00:09:59.957 "zcopy": true, 00:09:59.957 "get_zone_info": false, 00:09:59.957 "zone_management": false, 00:09:59.957 "zone_append": false, 00:09:59.957 "compare": false, 00:09:59.957 "compare_and_write": false, 00:09:59.957 "abort": true, 00:09:59.957 "seek_hole": false, 00:09:59.957 "seek_data": false, 00:09:59.957 "copy": true, 00:09:59.957 "nvme_iov_md": false 00:09:59.957 }, 00:09:59.957 "memory_domains": [ 00:09:59.957 { 00:09:59.957 "dma_device_id": "system", 00:09:59.957 "dma_device_type": 1 00:09:59.957 }, 00:09:59.957 { 00:09:59.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.957 "dma_device_type": 2 00:09:59.957 } 00:09:59.957 ], 00:09:59.957 "driver_specific": {} 00:09:59.957 } 00:09:59.957 ] 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.215 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.473 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.473 "name": "Existed_Raid", 00:10:00.473 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:10:00.473 "strip_size_kb": 64, 00:10:00.473 "state": "online", 00:10:00.473 "raid_level": "raid0", 00:10:00.473 "superblock": true, 00:10:00.473 "num_base_bdevs": 3, 00:10:00.473 "num_base_bdevs_discovered": 3, 00:10:00.473 "num_base_bdevs_operational": 3, 00:10:00.473 "base_bdevs_list": [ 00:10:00.473 { 00:10:00.473 "name": "NewBaseBdev", 00:10:00.473 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:10:00.473 "is_configured": true, 00:10:00.473 "data_offset": 2048, 00:10:00.473 "data_size": 63488 00:10:00.473 }, 00:10:00.473 { 00:10:00.473 "name": "BaseBdev2", 00:10:00.473 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:10:00.473 "is_configured": true, 00:10:00.473 "data_offset": 2048, 00:10:00.473 "data_size": 63488 00:10:00.473 }, 00:10:00.473 { 00:10:00.473 "name": "BaseBdev3", 00:10:00.473 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:10:00.473 "is_configured": true, 00:10:00.473 "data_offset": 2048, 00:10:00.473 "data_size": 63488 00:10:00.473 } 00:10:00.473 ] 00:10:00.474 }' 00:10:00.474 18:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.474 18:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:00.888 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
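(The property pass that follows repeats one pattern per configured base bdev: dump the bdev with bdev_get_bdevs -b <name>, then compare individual fields with jq — block_size against 512, and md_size, md_interleave, and dif_type against null for a plain malloc disk. Condensed into a loop, assuming the same socket and the bdev names visible in the log, the check is roughly:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for name in NewBaseBdev BaseBdev2 BaseBdev3; do
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # Malloc base bdevs report 512-byte blocks and carry no metadata or DIF.
        [[ $(jq .block_size <<< "$info") == 512 ]]  || echo "$name: bad block_size" >&2
        [[ $(jq .md_size <<< "$info") == null ]]    || echo "$name: unexpected md_size" >&2
        [[ $(jq .dif_type <<< "$info") == null ]]   || echo "$name: unexpected dif_type" >&2
    done

This is the same jq-per-field pattern as the raid-state check earlier, applied to the base bdev objects instead of the raid volume.)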
00:10:01.146 [2024-07-15 18:23:53.272050] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.146 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:01.146 "name": "Existed_Raid", 00:10:01.146 "aliases": [ 00:10:01.146 "5e093f3a-42d7-11ef-9ade-d5fc5159efa5" 00:10:01.146 ], 00:10:01.146 "product_name": "Raid Volume", 00:10:01.146 "block_size": 512, 00:10:01.146 "num_blocks": 190464, 00:10:01.146 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.146 "assigned_rate_limits": { 00:10:01.146 "rw_ios_per_sec": 0, 00:10:01.146 "rw_mbytes_per_sec": 0, 00:10:01.146 "r_mbytes_per_sec": 0, 00:10:01.146 "w_mbytes_per_sec": 0 00:10:01.146 }, 00:10:01.146 "claimed": false, 00:10:01.146 "zoned": false, 00:10:01.146 "supported_io_types": { 00:10:01.146 "read": true, 00:10:01.146 "write": true, 00:10:01.146 "unmap": true, 00:10:01.146 "flush": true, 00:10:01.146 "reset": true, 00:10:01.146 "nvme_admin": false, 00:10:01.146 "nvme_io": false, 00:10:01.146 "nvme_io_md": false, 00:10:01.146 "write_zeroes": true, 00:10:01.146 "zcopy": false, 00:10:01.146 "get_zone_info": false, 00:10:01.146 "zone_management": false, 00:10:01.146 "zone_append": false, 00:10:01.146 "compare": false, 00:10:01.146 "compare_and_write": false, 00:10:01.146 "abort": false, 00:10:01.146 "seek_hole": false, 00:10:01.146 "seek_data": false, 00:10:01.146 "copy": false, 00:10:01.146 "nvme_iov_md": false 00:10:01.146 }, 00:10:01.146 "memory_domains": [ 00:10:01.146 { 00:10:01.146 "dma_device_id": "system", 00:10:01.146 "dma_device_type": 1 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.146 "dma_device_type": 2 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "dma_device_id": "system", 00:10:01.146 "dma_device_type": 1 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.146 "dma_device_type": 2 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "dma_device_id": "system", 00:10:01.146 "dma_device_type": 1 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.146 "dma_device_type": 2 00:10:01.146 } 00:10:01.146 ], 00:10:01.146 "driver_specific": { 00:10:01.146 "raid": { 00:10:01.146 "uuid": "5e093f3a-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.146 "strip_size_kb": 64, 00:10:01.146 "state": "online", 00:10:01.146 "raid_level": "raid0", 00:10:01.146 "superblock": true, 00:10:01.146 "num_base_bdevs": 3, 00:10:01.146 "num_base_bdevs_discovered": 3, 00:10:01.146 "num_base_bdevs_operational": 3, 00:10:01.146 "base_bdevs_list": [ 00:10:01.146 { 00:10:01.146 "name": "NewBaseBdev", 00:10:01.146 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.146 "is_configured": true, 00:10:01.146 "data_offset": 2048, 00:10:01.146 "data_size": 63488 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "name": "BaseBdev2", 00:10:01.146 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.146 "is_configured": true, 00:10:01.146 "data_offset": 2048, 00:10:01.146 "data_size": 63488 00:10:01.146 }, 00:10:01.146 { 00:10:01.146 "name": "BaseBdev3", 00:10:01.146 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.146 "is_configured": true, 00:10:01.146 "data_offset": 2048, 00:10:01.146 "data_size": 63488 00:10:01.146 } 00:10:01.146 ] 00:10:01.146 } 00:10:01.146 } 00:10:01.146 }' 00:10:01.146 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.146 
18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:01.146 BaseBdev2 00:10:01.146 BaseBdev3' 00:10:01.146 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.146 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:01.146 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:01.404 "name": "NewBaseBdev", 00:10:01.404 "aliases": [ 00:10:01.404 "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5" 00:10:01.404 ], 00:10:01.404 "product_name": "Malloc disk", 00:10:01.404 "block_size": 512, 00:10:01.404 "num_blocks": 65536, 00:10:01.404 "uuid": "5f2c5f0e-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.404 "assigned_rate_limits": { 00:10:01.404 "rw_ios_per_sec": 0, 00:10:01.404 "rw_mbytes_per_sec": 0, 00:10:01.404 "r_mbytes_per_sec": 0, 00:10:01.404 "w_mbytes_per_sec": 0 00:10:01.404 }, 00:10:01.404 "claimed": true, 00:10:01.404 "claim_type": "exclusive_write", 00:10:01.404 "zoned": false, 00:10:01.404 "supported_io_types": { 00:10:01.404 "read": true, 00:10:01.404 "write": true, 00:10:01.404 "unmap": true, 00:10:01.404 "flush": true, 00:10:01.404 "reset": true, 00:10:01.404 "nvme_admin": false, 00:10:01.404 "nvme_io": false, 00:10:01.404 "nvme_io_md": false, 00:10:01.404 "write_zeroes": true, 00:10:01.404 "zcopy": true, 00:10:01.404 "get_zone_info": false, 00:10:01.404 "zone_management": false, 00:10:01.404 "zone_append": false, 00:10:01.404 "compare": false, 00:10:01.404 "compare_and_write": false, 00:10:01.404 "abort": true, 00:10:01.404 "seek_hole": false, 00:10:01.404 "seek_data": false, 00:10:01.404 "copy": true, 00:10:01.404 "nvme_iov_md": false 00:10:01.404 }, 00:10:01.404 "memory_domains": [ 00:10:01.404 { 00:10:01.404 "dma_device_id": "system", 00:10:01.404 "dma_device_type": 1 00:10:01.404 }, 00:10:01.404 { 00:10:01.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.404 "dma_device_type": 2 00:10:01.404 } 00:10:01.404 ], 00:10:01.404 "driver_specific": {} 00:10:01.404 }' 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:01.404 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:01.661 "name": "BaseBdev2", 00:10:01.661 "aliases": [ 00:10:01.661 "5d13e6db-42d7-11ef-9ade-d5fc5159efa5" 00:10:01.661 ], 00:10:01.661 "product_name": "Malloc disk", 00:10:01.661 "block_size": 512, 00:10:01.661 "num_blocks": 65536, 00:10:01.661 "uuid": "5d13e6db-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.661 "assigned_rate_limits": { 00:10:01.661 "rw_ios_per_sec": 0, 00:10:01.661 "rw_mbytes_per_sec": 0, 00:10:01.661 "r_mbytes_per_sec": 0, 00:10:01.661 "w_mbytes_per_sec": 0 00:10:01.661 }, 00:10:01.661 "claimed": true, 00:10:01.661 "claim_type": "exclusive_write", 00:10:01.661 "zoned": false, 00:10:01.661 "supported_io_types": { 00:10:01.661 "read": true, 00:10:01.661 "write": true, 00:10:01.661 "unmap": true, 00:10:01.661 "flush": true, 00:10:01.661 "reset": true, 00:10:01.661 "nvme_admin": false, 00:10:01.661 "nvme_io": false, 00:10:01.661 "nvme_io_md": false, 00:10:01.661 "write_zeroes": true, 00:10:01.661 "zcopy": true, 00:10:01.661 "get_zone_info": false, 00:10:01.661 "zone_management": false, 00:10:01.661 "zone_append": false, 00:10:01.661 "compare": false, 00:10:01.661 "compare_and_write": false, 00:10:01.661 "abort": true, 00:10:01.661 "seek_hole": false, 00:10:01.661 "seek_data": false, 00:10:01.661 "copy": true, 00:10:01.661 "nvme_iov_md": false 00:10:01.661 }, 00:10:01.661 "memory_domains": [ 00:10:01.661 { 00:10:01.661 "dma_device_id": "system", 00:10:01.661 "dma_device_type": 1 00:10:01.661 }, 00:10:01.661 { 00:10:01.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.661 "dma_device_type": 2 00:10:01.661 } 00:10:01.661 ], 00:10:01.661 "driver_specific": {} 00:10:01.661 }' 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.661 18:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:01.661 18:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:01.918 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:01.918 "name": "BaseBdev3", 00:10:01.918 "aliases": [ 00:10:01.918 "5d8ae975-42d7-11ef-9ade-d5fc5159efa5" 00:10:01.918 ], 00:10:01.918 "product_name": "Malloc disk", 00:10:01.918 "block_size": 512, 00:10:01.918 "num_blocks": 65536, 00:10:01.918 "uuid": "5d8ae975-42d7-11ef-9ade-d5fc5159efa5", 00:10:01.918 "assigned_rate_limits": { 00:10:01.918 "rw_ios_per_sec": 0, 00:10:01.918 "rw_mbytes_per_sec": 0, 00:10:01.918 "r_mbytes_per_sec": 0, 00:10:01.918 "w_mbytes_per_sec": 0 00:10:01.918 }, 00:10:01.918 "claimed": true, 00:10:01.918 "claim_type": "exclusive_write", 00:10:01.918 "zoned": false, 00:10:01.918 "supported_io_types": { 00:10:01.918 "read": true, 00:10:01.918 "write": true, 00:10:01.918 "unmap": true, 00:10:01.918 "flush": true, 00:10:01.918 "reset": true, 00:10:01.918 "nvme_admin": false, 00:10:01.918 "nvme_io": false, 00:10:01.918 "nvme_io_md": false, 00:10:01.918 "write_zeroes": true, 00:10:01.918 "zcopy": true, 00:10:01.918 "get_zone_info": false, 00:10:01.918 "zone_management": false, 00:10:01.918 "zone_append": false, 00:10:01.918 "compare": false, 00:10:01.919 "compare_and_write": false, 00:10:01.919 "abort": true, 00:10:01.919 "seek_hole": false, 00:10:01.919 "seek_data": false, 00:10:01.919 "copy": true, 00:10:01.919 "nvme_iov_md": false 00:10:01.919 }, 00:10:01.919 "memory_domains": [ 00:10:01.919 { 00:10:01.919 "dma_device_id": "system", 00:10:01.919 "dma_device_type": 1 00:10:01.919 }, 00:10:01.919 { 00:10:01.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.919 "dma_device_type": 2 00:10:01.919 } 00:10:01.919 ], 00:10:01.919 "driver_specific": {} 00:10:01.919 }' 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:01.919 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:02.176 [2024-07-15 18:23:54.539978] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:02.176 [2024-07-15 18:23:54.540004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.176 [2024-07-15 18:23:54.540042] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.176 [2024-07-15 18:23:54.540056] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.176 [2024-07-15 18:23:54.540060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x192936834a00 name Existed_Raid, state offline 00:10:02.176 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52705 00:10:02.176 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52705 ']' 00:10:02.176 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52705 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52705 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:02.434 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52705' 00:10:02.435 killing process with pid 52705 00:10:02.435 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52705 00:10:02.435 [2024-07-15 18:23:54.568632] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.435 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52705 00:10:02.435 [2024-07-15 18:23:54.590959] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.435 18:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:02.435 00:10:02.435 real 0m24.244s 00:10:02.435 user 0m44.291s 00:10:02.435 sys 0m3.321s 00:10:02.435 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.435 18:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.435 ************************************ 00:10:02.435 END TEST raid_state_function_test_sb 00:10:02.435 ************************************ 00:10:02.693 18:23:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:02.693 18:23:54 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:02.693 18:23:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:02.693 18:23:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.693 18:23:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 ************************************ 00:10:02.693 START TEST raid_superblock_test 00:10:02.693 ************************************ 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53433 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53433 /var/tmp/spdk-raid.sock 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53433 ']' 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.693 18:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 [2024-07-15 18:23:54.870703] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:10:02.693 [2024-07-15 18:23:54.870976] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:03.260 EAL: TSC is not safe to use in SMP mode 00:10:03.260 EAL: TSC is not invariant 00:10:03.260 [2024-07-15 18:23:55.470901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.260 [2024-07-15 18:23:55.584936] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
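(The setup that follows builds each base device as a 32 MiB malloc bdev — 65536 blocks of 512 bytes, matching the num_blocks/block_size reported above — wrapped in a passthru bdev with a fixed UUID, then assembles the three passthrus into a raid0 volume. Replayed by hand against the same bdev_svc RPC socket, the sequence amounts to:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512-byte blocks, plus a passthru layered on top.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid0 across the passthrus, 64 KiB strip size, -s to request the superblock.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The -s flag asks for superblock metadata on the base bdevs, which is the feature this raid_superblock_test exercises.)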
00:10:03.260 [2024-07-15 18:23:55.587061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.260 [2024-07-15 18:23:55.587856] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.260 [2024-07-15 18:23:55.587873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.827 18:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:03.827 malloc1 00:10:03.827 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.084 [2024-07-15 18:23:56.428488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.084 [2024-07-15 18:23:56.428558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.084 [2024-07-15 18:23:56.428571] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f234780 00:10:04.084 [2024-07-15 18:23:56.428580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.084 [2024-07-15 18:23:56.429604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.085 [2024-07-15 18:23:56.429634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.085 pt1 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.085 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.085 18:23:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:04.343 malloc2 00:10:04.343 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.687 [2024-07-15 18:23:56.928474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.687 [2024-07-15 18:23:56.928530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.687 [2024-07-15 18:23:56.928544] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f234c80 00:10:04.687 [2024-07-15 18:23:56.928552] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.687 [2024-07-15 18:23:56.929283] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.687 [2024-07-15 18:23:56.929309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.687 pt2 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.687 18:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:04.944 malloc3 00:10:04.944 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:05.201 [2024-07-15 18:23:57.464470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:05.201 [2024-07-15 18:23:57.464533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.201 [2024-07-15 18:23:57.464545] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f235180 00:10:05.201 [2024-07-15 18:23:57.464554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.201 [2024-07-15 18:23:57.465305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.201 [2024-07-15 18:23:57.465336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:05.201 pt3 00:10:05.201 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:05.201 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:05.201 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:05.459 [2024-07-15 18:23:57.704476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:05.459 [2024-07-15 18:23:57.705141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.459 [2024-07-15 18:23:57.705165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:05.459 [2024-07-15 18:23:57.705218] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbcf8f235400 00:10:05.459 [2024-07-15 18:23:57.705224] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:05.459 [2024-07-15 18:23:57.705260] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbcf8f297e20 00:10:05.459 [2024-07-15 18:23:57.705358] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbcf8f235400 00:10:05.459 [2024-07-15 18:23:57.705364] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbcf8f235400 00:10:05.459 [2024-07-15 18:23:57.705393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.459 18:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.718 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:05.718 "name": "raid_bdev1", 00:10:05.718 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:05.718 "strip_size_kb": 64, 00:10:05.718 "state": "online", 00:10:05.718 "raid_level": "raid0", 00:10:05.718 "superblock": true, 00:10:05.718 "num_base_bdevs": 3, 00:10:05.718 "num_base_bdevs_discovered": 3, 00:10:05.718 "num_base_bdevs_operational": 3, 00:10:05.718 "base_bdevs_list": [ 00:10:05.718 { 00:10:05.718 "name": "pt1", 00:10:05.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.718 "is_configured": true, 00:10:05.718 "data_offset": 2048, 00:10:05.718 "data_size": 63488 00:10:05.718 }, 00:10:05.718 { 00:10:05.718 "name": "pt2", 00:10:05.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.718 "is_configured": true, 00:10:05.718 
"data_offset": 2048, 00:10:05.718 "data_size": 63488 00:10:05.718 }, 00:10:05.718 { 00:10:05.718 "name": "pt3", 00:10:05.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.718 "is_configured": true, 00:10:05.718 "data_offset": 2048, 00:10:05.718 "data_size": 63488 00:10:05.718 } 00:10:05.718 ] 00:10:05.718 }' 00:10:05.718 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:05.718 18:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:06.283 [2024-07-15 18:23:58.640505] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.283 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:06.283 "name": "raid_bdev1", 00:10:06.283 "aliases": [ 00:10:06.283 "66831aad-42d7-11ef-9ade-d5fc5159efa5" 00:10:06.283 ], 00:10:06.283 "product_name": "Raid Volume", 00:10:06.283 "block_size": 512, 00:10:06.283 "num_blocks": 190464, 00:10:06.283 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:06.283 "assigned_rate_limits": { 00:10:06.283 "rw_ios_per_sec": 0, 00:10:06.283 "rw_mbytes_per_sec": 0, 00:10:06.283 "r_mbytes_per_sec": 0, 00:10:06.283 "w_mbytes_per_sec": 0 00:10:06.283 }, 00:10:06.283 "claimed": false, 00:10:06.283 "zoned": false, 00:10:06.283 "supported_io_types": { 00:10:06.283 "read": true, 00:10:06.283 "write": true, 00:10:06.283 "unmap": true, 00:10:06.283 "flush": true, 00:10:06.283 "reset": true, 00:10:06.283 "nvme_admin": false, 00:10:06.283 "nvme_io": false, 00:10:06.283 "nvme_io_md": false, 00:10:06.283 "write_zeroes": true, 00:10:06.283 "zcopy": false, 00:10:06.283 "get_zone_info": false, 00:10:06.283 "zone_management": false, 00:10:06.283 "zone_append": false, 00:10:06.283 "compare": false, 00:10:06.283 "compare_and_write": false, 00:10:06.283 "abort": false, 00:10:06.283 "seek_hole": false, 00:10:06.283 "seek_data": false, 00:10:06.283 "copy": false, 00:10:06.283 "nvme_iov_md": false 00:10:06.283 }, 00:10:06.283 "memory_domains": [ 00:10:06.283 { 00:10:06.283 "dma_device_id": "system", 00:10:06.283 "dma_device_type": 1 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.283 "dma_device_type": 2 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "dma_device_id": "system", 00:10:06.283 "dma_device_type": 1 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.283 "dma_device_type": 2 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "dma_device_id": "system", 00:10:06.283 "dma_device_type": 1 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:06.283 "dma_device_type": 2 00:10:06.283 } 00:10:06.283 ], 00:10:06.283 "driver_specific": { 00:10:06.283 "raid": { 00:10:06.283 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:06.283 "strip_size_kb": 64, 00:10:06.283 "state": "online", 00:10:06.283 "raid_level": "raid0", 00:10:06.283 "superblock": true, 00:10:06.283 "num_base_bdevs": 3, 00:10:06.283 "num_base_bdevs_discovered": 3, 00:10:06.283 "num_base_bdevs_operational": 3, 00:10:06.283 "base_bdevs_list": [ 00:10:06.283 { 00:10:06.283 "name": "pt1", 00:10:06.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.283 "is_configured": true, 00:10:06.283 "data_offset": 2048, 00:10:06.283 "data_size": 63488 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "name": "pt2", 00:10:06.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.283 "is_configured": true, 00:10:06.283 "data_offset": 2048, 00:10:06.283 "data_size": 63488 00:10:06.283 }, 00:10:06.283 { 00:10:06.283 "name": "pt3", 00:10:06.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.283 "is_configured": true, 00:10:06.283 "data_offset": 2048, 00:10:06.283 "data_size": 63488 00:10:06.283 } 00:10:06.283 ] 00:10:06.283 } 00:10:06.283 } 00:10:06.283 }' 00:10:06.284 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:06.542 pt2 00:10:06.542 pt3' 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:06.542 "name": "pt1", 00:10:06.542 "aliases": [ 00:10:06.542 "00000000-0000-0000-0000-000000000001" 00:10:06.542 ], 00:10:06.542 "product_name": "passthru", 00:10:06.542 "block_size": 512, 00:10:06.542 "num_blocks": 65536, 00:10:06.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.542 "assigned_rate_limits": { 00:10:06.542 "rw_ios_per_sec": 0, 00:10:06.542 "rw_mbytes_per_sec": 0, 00:10:06.542 "r_mbytes_per_sec": 0, 00:10:06.542 "w_mbytes_per_sec": 0 00:10:06.542 }, 00:10:06.542 "claimed": true, 00:10:06.542 "claim_type": "exclusive_write", 00:10:06.542 "zoned": false, 00:10:06.542 "supported_io_types": { 00:10:06.542 "read": true, 00:10:06.542 "write": true, 00:10:06.542 "unmap": true, 00:10:06.542 "flush": true, 00:10:06.542 "reset": true, 00:10:06.542 "nvme_admin": false, 00:10:06.542 "nvme_io": false, 00:10:06.542 "nvme_io_md": false, 00:10:06.542 "write_zeroes": true, 00:10:06.542 "zcopy": true, 00:10:06.542 "get_zone_info": false, 00:10:06.542 "zone_management": false, 00:10:06.542 "zone_append": false, 00:10:06.542 "compare": false, 00:10:06.542 "compare_and_write": false, 00:10:06.542 "abort": true, 00:10:06.542 "seek_hole": false, 00:10:06.542 "seek_data": false, 00:10:06.542 "copy": true, 00:10:06.542 "nvme_iov_md": false 00:10:06.542 }, 00:10:06.542 "memory_domains": [ 00:10:06.542 { 00:10:06.542 "dma_device_id": "system", 00:10:06.542 "dma_device_type": 1 00:10:06.542 }, 00:10:06.542 { 00:10:06.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.542 "dma_device_type": 2 
00:10:06.542 } 00:10:06.542 ], 00:10:06.542 "driver_specific": { 00:10:06.542 "passthru": { 00:10:06.542 "name": "pt1", 00:10:06.542 "base_bdev_name": "malloc1" 00:10:06.542 } 00:10:06.542 } 00:10:06.542 }' 00:10:06.542 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:06.802 18:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:07.061 "name": "pt2", 00:10:07.061 "aliases": [ 00:10:07.061 "00000000-0000-0000-0000-000000000002" 00:10:07.061 ], 00:10:07.061 "product_name": "passthru", 00:10:07.061 "block_size": 512, 00:10:07.061 "num_blocks": 65536, 00:10:07.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.061 "assigned_rate_limits": { 00:10:07.061 "rw_ios_per_sec": 0, 00:10:07.061 "rw_mbytes_per_sec": 0, 00:10:07.061 "r_mbytes_per_sec": 0, 00:10:07.061 "w_mbytes_per_sec": 0 00:10:07.061 }, 00:10:07.061 "claimed": true, 00:10:07.061 "claim_type": "exclusive_write", 00:10:07.061 "zoned": false, 00:10:07.061 "supported_io_types": { 00:10:07.061 "read": true, 00:10:07.061 "write": true, 00:10:07.061 "unmap": true, 00:10:07.061 "flush": true, 00:10:07.061 "reset": true, 00:10:07.061 "nvme_admin": false, 00:10:07.061 "nvme_io": false, 00:10:07.061 "nvme_io_md": false, 00:10:07.061 "write_zeroes": true, 00:10:07.061 "zcopy": true, 00:10:07.061 "get_zone_info": false, 00:10:07.061 "zone_management": false, 00:10:07.061 "zone_append": false, 00:10:07.061 "compare": false, 00:10:07.061 "compare_and_write": false, 00:10:07.061 "abort": true, 00:10:07.061 "seek_hole": false, 00:10:07.061 "seek_data": false, 00:10:07.061 "copy": true, 00:10:07.061 "nvme_iov_md": false 00:10:07.061 }, 00:10:07.061 "memory_domains": [ 00:10:07.061 { 00:10:07.061 "dma_device_id": "system", 00:10:07.061 "dma_device_type": 1 00:10:07.061 }, 00:10:07.061 { 00:10:07.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.061 "dma_device_type": 2 00:10:07.061 } 00:10:07.061 ], 00:10:07.061 "driver_specific": { 00:10:07.061 "passthru": { 00:10:07.061 "name": "pt2", 00:10:07.061 "base_bdev_name": 
"malloc2" 00:10:07.061 } 00:10:07.061 } 00:10:07.061 }' 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:07.061 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:07.320 "name": "pt3", 00:10:07.320 "aliases": [ 00:10:07.320 "00000000-0000-0000-0000-000000000003" 00:10:07.320 ], 00:10:07.320 "product_name": "passthru", 00:10:07.320 "block_size": 512, 00:10:07.320 "num_blocks": 65536, 00:10:07.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.320 "assigned_rate_limits": { 00:10:07.320 "rw_ios_per_sec": 0, 00:10:07.320 "rw_mbytes_per_sec": 0, 00:10:07.320 "r_mbytes_per_sec": 0, 00:10:07.320 "w_mbytes_per_sec": 0 00:10:07.320 }, 00:10:07.320 "claimed": true, 00:10:07.320 "claim_type": "exclusive_write", 00:10:07.320 "zoned": false, 00:10:07.320 "supported_io_types": { 00:10:07.320 "read": true, 00:10:07.320 "write": true, 00:10:07.320 "unmap": true, 00:10:07.320 "flush": true, 00:10:07.320 "reset": true, 00:10:07.320 "nvme_admin": false, 00:10:07.320 "nvme_io": false, 00:10:07.320 "nvme_io_md": false, 00:10:07.320 "write_zeroes": true, 00:10:07.320 "zcopy": true, 00:10:07.320 "get_zone_info": false, 00:10:07.320 "zone_management": false, 00:10:07.320 "zone_append": false, 00:10:07.320 "compare": false, 00:10:07.320 "compare_and_write": false, 00:10:07.320 "abort": true, 00:10:07.320 "seek_hole": false, 00:10:07.320 "seek_data": false, 00:10:07.320 "copy": true, 00:10:07.320 "nvme_iov_md": false 00:10:07.320 }, 00:10:07.320 "memory_domains": [ 00:10:07.320 { 00:10:07.320 "dma_device_id": "system", 00:10:07.320 "dma_device_type": 1 00:10:07.320 }, 00:10:07.320 { 00:10:07.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.320 "dma_device_type": 2 00:10:07.320 } 00:10:07.320 ], 00:10:07.320 "driver_specific": { 00:10:07.320 "passthru": { 00:10:07.320 "name": "pt3", 00:10:07.320 "base_bdev_name": "malloc3" 00:10:07.320 } 00:10:07.320 } 00:10:07.320 }' 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:07.320 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:07.579 [2024-07-15 18:23:59.852513] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.579 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=66831aad-42d7-11ef-9ade-d5fc5159efa5 00:10:07.579 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 66831aad-42d7-11ef-9ade-d5fc5159efa5 ']' 00:10:07.579 18:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:07.837 [2024-07-15 18:24:00.132466] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.837 [2024-07-15 18:24:00.132494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.837 [2024-07-15 18:24:00.132518] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.837 [2024-07-15 18:24:00.132533] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.837 [2024-07-15 18:24:00.132538] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcf8f235400 name raid_bdev1, state offline 00:10:07.837 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:07.837 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.095 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:08.095 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:08.095 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.095 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:08.662 18:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.662 18:24:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:08.920 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.920 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:08.920 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:08.920 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:09.487 [2024-07-15 18:24:01.832486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:09.487 [2024-07-15 18:24:01.833222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:09.487 [2024-07-15 18:24:01.833245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:09.487 [2024-07-15 18:24:01.833262] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:09.487 [2024-07-15 18:24:01.833299] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:09.487 [2024-07-15 18:24:01.833312] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:10:09.487 [2024-07-15 18:24:01.833321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.487 [2024-07-15 18:24:01.833325] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcf8f235180 name raid_bdev1, state configuring 00:10:09.487 request: 00:10:09.487 { 00:10:09.487 "name": "raid_bdev1", 00:10:09.487 "raid_level": "raid0", 00:10:09.487 "base_bdevs": [ 00:10:09.487 "malloc1", 00:10:09.487 "malloc2", 00:10:09.487 "malloc3" 00:10:09.487 ], 00:10:09.487 "strip_size_kb": 64, 00:10:09.487 "superblock": false, 00:10:09.487 "method": "bdev_raid_create", 00:10:09.487 "req_id": 1 00:10:09.487 } 00:10:09.487 Got JSON-RPC error response 00:10:09.487 response: 00:10:09.487 { 00:10:09.487 "code": -17, 00:10:09.487 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:09.487 } 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.487 18:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:09.745 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:09.745 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:09.745 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.003 [2024-07-15 18:24:02.352484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.003 [2024-07-15 18:24:02.352553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.003 [2024-07-15 18:24:02.352566] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f234c80 00:10:10.003 [2024-07-15 18:24:02.352574] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.003 [2024-07-15 18:24:02.353315] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.003 [2024-07-15 18:24:02.353345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.003 [2024-07-15 18:24:02.353383] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:10.003 [2024-07-15 18:24:02.353395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.003 pt1 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.003 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.569 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.569 "name": "raid_bdev1", 00:10:10.569 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:10.569 "strip_size_kb": 64, 00:10:10.569 "state": "configuring", 00:10:10.569 "raid_level": "raid0", 00:10:10.569 "superblock": true, 00:10:10.569 "num_base_bdevs": 3, 00:10:10.569 "num_base_bdevs_discovered": 1, 00:10:10.569 "num_base_bdevs_operational": 3, 00:10:10.569 "base_bdevs_list": [ 00:10:10.569 { 00:10:10.569 "name": "pt1", 00:10:10.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.569 "is_configured": true, 00:10:10.569 "data_offset": 2048, 00:10:10.569 "data_size": 63488 00:10:10.569 }, 00:10:10.569 { 00:10:10.569 "name": null, 00:10:10.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.569 "is_configured": false, 00:10:10.569 "data_offset": 2048, 00:10:10.569 "data_size": 63488 00:10:10.569 }, 00:10:10.569 { 00:10:10.569 "name": null, 00:10:10.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.569 "is_configured": false, 00:10:10.569 "data_offset": 2048, 00:10:10.569 "data_size": 63488 00:10:10.569 } 00:10:10.569 ] 00:10:10.569 }' 00:10:10.569 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.569 18:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.827 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:10:10.827 18:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.085 [2024-07-15 18:24:03.220499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.085 [2024-07-15 18:24:03.220560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.085 [2024-07-15 18:24:03.220573] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f235680 00:10:11.085 [2024-07-15 18:24:03.220582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.085 [2024-07-15 18:24:03.220711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.085 [2024-07-15 18:24:03.220723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.085 [2024-07-15 18:24:03.220746] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.085 [2024-07-15 18:24:03.220755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.085 
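# The trace that follows deletes the just-claimed pt2 again and verifies that the
# array falls back to a single discovered base bdev while staying in the
# "configuring" state. A minimal manual version of that check, assuming the same
# RPC socket used throughout this log, might be:
#
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
#       | jq -r '.[] | select(.name == "raid_bdev1") | .state'
#   # expected output while a base bdev is missing: configuring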
pt2 00:10:11.085 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:11.343 [2024-07-15 18:24:03.488502] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.343 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.603 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.603 "name": "raid_bdev1", 00:10:11.603 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:11.603 "strip_size_kb": 64, 00:10:11.603 "state": "configuring", 00:10:11.603 "raid_level": "raid0", 00:10:11.603 "superblock": true, 00:10:11.603 "num_base_bdevs": 3, 00:10:11.603 "num_base_bdevs_discovered": 1, 00:10:11.603 "num_base_bdevs_operational": 3, 00:10:11.603 "base_bdevs_list": [ 00:10:11.603 { 00:10:11.603 "name": "pt1", 00:10:11.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.603 "is_configured": true, 00:10:11.603 "data_offset": 2048, 00:10:11.603 "data_size": 63488 00:10:11.603 }, 00:10:11.603 { 00:10:11.603 "name": null, 00:10:11.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.603 "is_configured": false, 00:10:11.603 "data_offset": 2048, 00:10:11.603 "data_size": 63488 00:10:11.603 }, 00:10:11.603 { 00:10:11.603 "name": null, 00:10:11.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.603 "is_configured": false, 00:10:11.603 "data_offset": 2048, 00:10:11.603 "data_size": 63488 00:10:11.603 } 00:10:11.603 ] 00:10:11.603 }' 00:10:11.603 18:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.603 18:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.861 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:11.861 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:11.861 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.118 [2024-07-15 
18:24:04.412512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.118 [2024-07-15 18:24:04.412575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.118 [2024-07-15 18:24:04.412588] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f235680 00:10:12.118 [2024-07-15 18:24:04.412597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.118 [2024-07-15 18:24:04.412720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.118 [2024-07-15 18:24:04.412732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.118 [2024-07-15 18:24:04.412756] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.118 [2024-07-15 18:24:04.412765] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.118 pt2 00:10:12.118 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:12.118 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:12.118 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.376 [2024-07-15 18:24:04.704516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.376 [2024-07-15 18:24:04.704574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.376 [2024-07-15 18:24:04.704588] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbcf8f235400 00:10:12.376 [2024-07-15 18:24:04.704596] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.376 [2024-07-15 18:24:04.704736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.376 [2024-07-15 18:24:04.704755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.376 [2024-07-15 18:24:04.704780] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.376 [2024-07-15 18:24:04.704790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.376 [2024-07-15 18:24:04.704826] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbcf8f234780 00:10:12.376 [2024-07-15 18:24:04.704831] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:12.376 [2024-07-15 18:24:04.704854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbcf8f297e20 00:10:12.376 [2024-07-15 18:24:04.704906] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbcf8f234780 00:10:12.376 [2024-07-15 18:24:04.704912] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbcf8f234780 00:10:12.376 [2024-07-15 18:24:04.704941] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.376 pt3 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.377 18:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.635 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.635 "name": "raid_bdev1", 00:10:12.635 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:12.635 "strip_size_kb": 64, 00:10:12.635 "state": "online", 00:10:12.635 "raid_level": "raid0", 00:10:12.635 "superblock": true, 00:10:12.635 "num_base_bdevs": 3, 00:10:12.635 "num_base_bdevs_discovered": 3, 00:10:12.635 "num_base_bdevs_operational": 3, 00:10:12.635 "base_bdevs_list": [ 00:10:12.635 { 00:10:12.635 "name": "pt1", 00:10:12.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.635 "is_configured": true, 00:10:12.635 "data_offset": 2048, 00:10:12.635 "data_size": 63488 00:10:12.635 }, 00:10:12.635 { 00:10:12.635 "name": "pt2", 00:10:12.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.635 "is_configured": true, 00:10:12.635 "data_offset": 2048, 00:10:12.635 "data_size": 63488 00:10:12.635 }, 00:10:12.635 { 00:10:12.635 "name": "pt3", 00:10:12.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.635 "is_configured": true, 00:10:12.635 "data_offset": 2048, 00:10:12.635 "data_size": 63488 00:10:12.635 } 00:10:12.635 ] 00:10:12.635 }' 00:10:12.635 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.635 18:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:13.201 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:13.461 [2024-07-15 
18:24:05.588561] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.461 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:13.461 "name": "raid_bdev1", 00:10:13.461 "aliases": [ 00:10:13.461 "66831aad-42d7-11ef-9ade-d5fc5159efa5" 00:10:13.461 ], 00:10:13.461 "product_name": "Raid Volume", 00:10:13.461 "block_size": 512, 00:10:13.461 "num_blocks": 190464, 00:10:13.461 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:13.461 "assigned_rate_limits": { 00:10:13.461 "rw_ios_per_sec": 0, 00:10:13.461 "rw_mbytes_per_sec": 0, 00:10:13.461 "r_mbytes_per_sec": 0, 00:10:13.461 "w_mbytes_per_sec": 0 00:10:13.461 }, 00:10:13.461 "claimed": false, 00:10:13.461 "zoned": false, 00:10:13.461 "supported_io_types": { 00:10:13.461 "read": true, 00:10:13.461 "write": true, 00:10:13.461 "unmap": true, 00:10:13.461 "flush": true, 00:10:13.461 "reset": true, 00:10:13.461 "nvme_admin": false, 00:10:13.461 "nvme_io": false, 00:10:13.461 "nvme_io_md": false, 00:10:13.461 "write_zeroes": true, 00:10:13.461 "zcopy": false, 00:10:13.461 "get_zone_info": false, 00:10:13.461 "zone_management": false, 00:10:13.461 "zone_append": false, 00:10:13.461 "compare": false, 00:10:13.461 "compare_and_write": false, 00:10:13.461 "abort": false, 00:10:13.461 "seek_hole": false, 00:10:13.461 "seek_data": false, 00:10:13.461 "copy": false, 00:10:13.461 "nvme_iov_md": false 00:10:13.461 }, 00:10:13.461 "memory_domains": [ 00:10:13.461 { 00:10:13.461 "dma_device_id": "system", 00:10:13.461 "dma_device_type": 1 00:10:13.461 }, 00:10:13.461 { 00:10:13.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.461 "dma_device_type": 2 00:10:13.461 }, 00:10:13.461 { 00:10:13.461 "dma_device_id": "system", 00:10:13.461 "dma_device_type": 1 00:10:13.461 }, 00:10:13.461 { 00:10:13.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.461 "dma_device_type": 2 00:10:13.461 }, 00:10:13.461 { 00:10:13.461 "dma_device_id": "system", 00:10:13.461 "dma_device_type": 1 00:10:13.461 }, 00:10:13.461 { 00:10:13.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.462 "dma_device_type": 2 00:10:13.462 } 00:10:13.462 ], 00:10:13.462 "driver_specific": { 00:10:13.462 "raid": { 00:10:13.462 "uuid": "66831aad-42d7-11ef-9ade-d5fc5159efa5", 00:10:13.462 "strip_size_kb": 64, 00:10:13.462 "state": "online", 00:10:13.462 "raid_level": "raid0", 00:10:13.462 "superblock": true, 00:10:13.462 "num_base_bdevs": 3, 00:10:13.462 "num_base_bdevs_discovered": 3, 00:10:13.462 "num_base_bdevs_operational": 3, 00:10:13.462 "base_bdevs_list": [ 00:10:13.462 { 00:10:13.462 "name": "pt1", 00:10:13.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.462 "is_configured": true, 00:10:13.462 "data_offset": 2048, 00:10:13.462 "data_size": 63488 00:10:13.462 }, 00:10:13.462 { 00:10:13.462 "name": "pt2", 00:10:13.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.462 "is_configured": true, 00:10:13.462 "data_offset": 2048, 00:10:13.462 "data_size": 63488 00:10:13.462 }, 00:10:13.462 { 00:10:13.462 "name": "pt3", 00:10:13.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.462 "is_configured": true, 00:10:13.462 "data_offset": 2048, 00:10:13.462 "data_size": 63488 00:10:13.462 } 00:10:13.462 ] 00:10:13.462 } 00:10:13.462 } 00:10:13.462 }' 00:10:13.462 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.462 18:24:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:13.462 pt2 00:10:13.462 pt3' 00:10:13.462 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.462 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:13.462 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.721 "name": "pt1", 00:10:13.721 "aliases": [ 00:10:13.721 "00000000-0000-0000-0000-000000000001" 00:10:13.721 ], 00:10:13.721 "product_name": "passthru", 00:10:13.721 "block_size": 512, 00:10:13.721 "num_blocks": 65536, 00:10:13.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.721 "assigned_rate_limits": { 00:10:13.721 "rw_ios_per_sec": 0, 00:10:13.721 "rw_mbytes_per_sec": 0, 00:10:13.721 "r_mbytes_per_sec": 0, 00:10:13.721 "w_mbytes_per_sec": 0 00:10:13.721 }, 00:10:13.721 "claimed": true, 00:10:13.721 "claim_type": "exclusive_write", 00:10:13.721 "zoned": false, 00:10:13.721 "supported_io_types": { 00:10:13.721 "read": true, 00:10:13.721 "write": true, 00:10:13.721 "unmap": true, 00:10:13.721 "flush": true, 00:10:13.721 "reset": true, 00:10:13.721 "nvme_admin": false, 00:10:13.721 "nvme_io": false, 00:10:13.721 "nvme_io_md": false, 00:10:13.721 "write_zeroes": true, 00:10:13.721 "zcopy": true, 00:10:13.721 "get_zone_info": false, 00:10:13.721 "zone_management": false, 00:10:13.721 "zone_append": false, 00:10:13.721 "compare": false, 00:10:13.721 "compare_and_write": false, 00:10:13.721 "abort": true, 00:10:13.721 "seek_hole": false, 00:10:13.721 "seek_data": false, 00:10:13.721 "copy": true, 00:10:13.721 "nvme_iov_md": false 00:10:13.721 }, 00:10:13.721 "memory_domains": [ 00:10:13.721 { 00:10:13.721 "dma_device_id": "system", 00:10:13.721 "dma_device_type": 1 00:10:13.721 }, 00:10:13.721 { 00:10:13.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.721 "dma_device_type": 2 00:10:13.721 } 00:10:13.721 ], 00:10:13.721 "driver_specific": { 00:10:13.721 "passthru": { 00:10:13.721 "name": "pt1", 00:10:13.721 "base_bdev_name": "malloc1" 00:10:13.721 } 00:10:13.721 } 00:10:13.721 }' 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.721 18:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:13.979 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.979 "name": "pt2", 00:10:13.979 "aliases": [ 00:10:13.979 "00000000-0000-0000-0000-000000000002" 00:10:13.979 ], 00:10:13.979 "product_name": "passthru", 00:10:13.979 "block_size": 512, 00:10:13.979 "num_blocks": 65536, 00:10:13.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.979 "assigned_rate_limits": { 00:10:13.979 "rw_ios_per_sec": 0, 00:10:13.979 "rw_mbytes_per_sec": 0, 00:10:13.979 "r_mbytes_per_sec": 0, 00:10:13.979 "w_mbytes_per_sec": 0 00:10:13.979 }, 00:10:13.979 "claimed": true, 00:10:13.979 "claim_type": "exclusive_write", 00:10:13.979 "zoned": false, 00:10:13.979 "supported_io_types": { 00:10:13.979 "read": true, 00:10:13.979 "write": true, 00:10:13.979 "unmap": true, 00:10:13.979 "flush": true, 00:10:13.979 "reset": true, 00:10:13.979 "nvme_admin": false, 00:10:13.979 "nvme_io": false, 00:10:13.979 "nvme_io_md": false, 00:10:13.979 "write_zeroes": true, 00:10:13.979 "zcopy": true, 00:10:13.979 "get_zone_info": false, 00:10:13.979 "zone_management": false, 00:10:13.979 "zone_append": false, 00:10:13.979 "compare": false, 00:10:13.979 "compare_and_write": false, 00:10:13.979 "abort": true, 00:10:13.980 "seek_hole": false, 00:10:13.980 "seek_data": false, 00:10:13.980 "copy": true, 00:10:13.980 "nvme_iov_md": false 00:10:13.980 }, 00:10:13.980 "memory_domains": [ 00:10:13.980 { 00:10:13.980 "dma_device_id": "system", 00:10:13.980 "dma_device_type": 1 00:10:13.980 }, 00:10:13.980 { 00:10:13.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.980 "dma_device_type": 2 00:10:13.980 } 00:10:13.980 ], 00:10:13.980 "driver_specific": { 00:10:13.980 "passthru": { 00:10:13.980 "name": "pt2", 00:10:13.980 "base_bdev_name": "malloc2" 00:10:13.980 } 00:10:13.980 } 00:10:13.980 }' 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:13.980 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:14.237 "name": "pt3", 00:10:14.237 "aliases": [ 00:10:14.237 "00000000-0000-0000-0000-000000000003" 00:10:14.237 ], 00:10:14.237 "product_name": "passthru", 00:10:14.237 "block_size": 512, 00:10:14.237 "num_blocks": 65536, 00:10:14.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.237 "assigned_rate_limits": { 00:10:14.237 "rw_ios_per_sec": 0, 00:10:14.237 "rw_mbytes_per_sec": 0, 00:10:14.237 "r_mbytes_per_sec": 0, 00:10:14.237 "w_mbytes_per_sec": 0 00:10:14.237 }, 00:10:14.237 "claimed": true, 00:10:14.237 "claim_type": "exclusive_write", 00:10:14.237 "zoned": false, 00:10:14.237 "supported_io_types": { 00:10:14.237 "read": true, 00:10:14.237 "write": true, 00:10:14.237 "unmap": true, 00:10:14.237 "flush": true, 00:10:14.237 "reset": true, 00:10:14.237 "nvme_admin": false, 00:10:14.237 "nvme_io": false, 00:10:14.237 "nvme_io_md": false, 00:10:14.237 "write_zeroes": true, 00:10:14.237 "zcopy": true, 00:10:14.237 "get_zone_info": false, 00:10:14.237 "zone_management": false, 00:10:14.237 "zone_append": false, 00:10:14.237 "compare": false, 00:10:14.237 "compare_and_write": false, 00:10:14.237 "abort": true, 00:10:14.237 "seek_hole": false, 00:10:14.237 "seek_data": false, 00:10:14.237 "copy": true, 00:10:14.237 "nvme_iov_md": false 00:10:14.237 }, 00:10:14.237 "memory_domains": [ 00:10:14.237 { 00:10:14.237 "dma_device_id": "system", 00:10:14.237 "dma_device_type": 1 00:10:14.237 }, 00:10:14.237 { 00:10:14.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.237 "dma_device_type": 2 00:10:14.237 } 00:10:14.237 ], 00:10:14.237 "driver_specific": { 00:10:14.237 "passthru": { 00:10:14.237 "name": "pt3", 00:10:14.237 "base_bdev_name": "malloc3" 00:10:14.237 } 00:10:14.237 } 00:10:14.237 }' 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.237 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.495 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.495 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:14.495 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:14.765 [2024-07-15 18:24:06.888592] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 66831aad-42d7-11ef-9ade-d5fc5159efa5 '!=' 66831aad-42d7-11ef-9ade-d5fc5159efa5 ']' 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53433 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53433 ']' 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53433 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53433 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:14.765 killing process with pid 53433 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53433' 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53433 00:10:14.765 [2024-07-15 18:24:06.919480] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.765 [2024-07-15 18:24:06.919510] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.765 [2024-07-15 18:24:06.919524] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.765 [2024-07-15 18:24:06.919529] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcf8f234780 name raid_bdev1, state offline 00:10:14.765 18:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53433 00:10:14.765 [2024-07-15 18:24:06.942461] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.023 18:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:15.023 00:10:15.023 real 0m12.298s 00:10:15.023 user 0m21.754s 00:10:15.023 sys 0m2.028s 00:10:15.023 18:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.023 ************************************ 00:10:15.023 18:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.023 END TEST raid_superblock_test 00:10:15.023 ************************************ 00:10:15.023 18:24:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:15.023 18:24:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:15.023 18:24:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:15.023 18:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.023 18:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.023 ************************************ 
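# The read-error test that starts below (raid_io_error_test raid0 3 read) builds
# each RAID member as a three-layer stack (malloc bdev, error-injection bdev,
# passthru bdev) so that read failures can later be injected underneath the raid0
# volume. A sketch of one member's setup, using the same RPCs the trace records
# (bdev_error_create names its bdev EE_<base>, hence EE_BaseBdev1_malloc):
#
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1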
00:10:15.023 START TEST raid_read_error_test 00:10:15.023 ************************************ 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qNSAkpAHHq 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53788 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53788 /var/tmp/spdk-raid.sock 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53788 ']' 00:10:15.023 18:24:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:15.024 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:15.024 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:15.024 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.024 18:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.024 [2024-07-15 18:24:07.223322] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:10:15.024 [2024-07-15 18:24:07.223498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:15.590 EAL: TSC is not safe to use in SMP mode 00:10:15.590 EAL: TSC is not invariant 00:10:15.590 [2024-07-15 18:24:07.845159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.590 [2024-07-15 18:24:07.954922] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:15.590 [2024-07-15 18:24:07.957028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.590 [2024-07-15 18:24:07.957797] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.590 [2024-07-15 18:24:07.957811] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.895 18:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.895 18:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:15.895 18:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:15.895 18:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.155 BaseBdev1_malloc 00:10:16.155 18:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:16.414 true 00:10:16.414 18:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.671 [2024-07-15 18:24:08.942421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.671 [2024-07-15 18:24:08.942487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.671 [2024-07-15 18:24:08.942517] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89247e34780 00:10:16.671 [2024-07-15 18:24:08.942526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.671 [2024-07-15 18:24:08.943350] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.671 [2024-07-15 18:24:08.943381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.671 BaseBdev1 00:10:16.671 18:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:16.671 18:24:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.929 BaseBdev2_malloc 00:10:16.929 18:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:17.188 true 00:10:17.188 18:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:17.445 [2024-07-15 18:24:09.822433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:17.445 [2024-07-15 18:24:09.822493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.445 [2024-07-15 18:24:09.822523] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89247e34c80 00:10:17.445 [2024-07-15 18:24:09.822532] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.445 [2024-07-15 18:24:09.823430] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.445 [2024-07-15 18:24:09.823455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.445 BaseBdev2 00:10:17.703 18:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:17.703 18:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:17.960 BaseBdev3_malloc 00:10:17.960 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:18.218 true 00:10:18.219 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:18.477 [2024-07-15 18:24:10.694447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:18.477 [2024-07-15 18:24:10.694507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.477 [2024-07-15 18:24:10.694537] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89247e35180 00:10:18.477 [2024-07-15 18:24:10.694546] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.477 [2024-07-15 18:24:10.695372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.477 [2024-07-15 18:24:10.695401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:18.477 BaseBdev3 00:10:18.477 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:18.736 [2024-07-15 18:24:10.946470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.736 [2024-07-15 18:24:10.947175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.736 [2024-07-15 18:24:10.947209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.736 [2024-07-15 
18:24:10.947270] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x89247e35400 00:10:18.736 [2024-07-15 18:24:10.947276] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:18.736 [2024-07-15 18:24:10.947317] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x89247ea0e20 00:10:18.736 [2024-07-15 18:24:10.947403] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x89247e35400 00:10:18.736 [2024-07-15 18:24:10.947407] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x89247e35400 00:10:18.736 [2024-07-15 18:24:10.947435] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.736 18:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.994 18:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.994 "name": "raid_bdev1", 00:10:18.994 "uuid": "6e67ac14-42d7-11ef-9ade-d5fc5159efa5", 00:10:18.994 "strip_size_kb": 64, 00:10:18.994 "state": "online", 00:10:18.994 "raid_level": "raid0", 00:10:18.994 "superblock": true, 00:10:18.994 "num_base_bdevs": 3, 00:10:18.994 "num_base_bdevs_discovered": 3, 00:10:18.994 "num_base_bdevs_operational": 3, 00:10:18.994 "base_bdevs_list": [ 00:10:18.994 { 00:10:18.994 "name": "BaseBdev1", 00:10:18.994 "uuid": "3e61a3b8-28d7-1c5a-8329-53fa1e1f7ed7", 00:10:18.994 "is_configured": true, 00:10:18.994 "data_offset": 2048, 00:10:18.994 "data_size": 63488 00:10:18.994 }, 00:10:18.994 { 00:10:18.994 "name": "BaseBdev2", 00:10:18.994 "uuid": "516aa6ef-cdd1-b05a-9fd5-16a99c0a42b4", 00:10:18.994 "is_configured": true, 00:10:18.994 "data_offset": 2048, 00:10:18.994 "data_size": 63488 00:10:18.994 }, 00:10:18.994 { 00:10:18.994 "name": "BaseBdev3", 00:10:18.994 "uuid": "f6e73540-bc71-e055-ab0e-d1cb3df95c93", 00:10:18.994 "is_configured": true, 00:10:18.994 "data_offset": 2048, 00:10:18.994 "data_size": 63488 00:10:18.994 } 00:10:18.994 ] 00:10:18.994 }' 00:10:18.994 18:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.994 18:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.252 18:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:19.252 18:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:19.511 [2024-07-15 18:24:11.650731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x89247ea0ec0 00:10:20.466 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.725 18:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.985 18:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.985 "name": "raid_bdev1", 00:10:20.985 "uuid": "6e67ac14-42d7-11ef-9ade-d5fc5159efa5", 00:10:20.985 "strip_size_kb": 64, 00:10:20.985 "state": "online", 00:10:20.985 "raid_level": "raid0", 00:10:20.985 "superblock": true, 00:10:20.985 "num_base_bdevs": 3, 00:10:20.985 "num_base_bdevs_discovered": 3, 00:10:20.985 "num_base_bdevs_operational": 3, 00:10:20.985 "base_bdevs_list": [ 00:10:20.985 { 00:10:20.985 "name": "BaseBdev1", 00:10:20.985 "uuid": "3e61a3b8-28d7-1c5a-8329-53fa1e1f7ed7", 00:10:20.985 "is_configured": true, 00:10:20.985 "data_offset": 2048, 00:10:20.985 "data_size": 63488 00:10:20.985 }, 00:10:20.985 { 00:10:20.985 "name": "BaseBdev2", 00:10:20.985 "uuid": "516aa6ef-cdd1-b05a-9fd5-16a99c0a42b4", 00:10:20.985 "is_configured": true, 00:10:20.985 "data_offset": 2048, 00:10:20.985 "data_size": 63488 00:10:20.985 }, 00:10:20.985 { 00:10:20.985 "name": "BaseBdev3", 00:10:20.985 "uuid": "f6e73540-bc71-e055-ab0e-d1cb3df95c93", 00:10:20.985 "is_configured": true, 00:10:20.985 "data_offset": 2048, 00:10:20.985 "data_size": 63488 00:10:20.985 } 00:10:20.985 ] 
00:10:20.985 }' 00:10:20.985 18:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.985 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.243 18:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:21.501 [2024-07-15 18:24:13.842611] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.501 [2024-07-15 18:24:13.842640] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.501 [2024-07-15 18:24:13.842972] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.501 [2024-07-15 18:24:13.842982] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.501 [2024-07-15 18:24:13.843002] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.501 [2024-07-15 18:24:13.843014] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x89247e35400 name raid_bdev1, state offline 00:10:21.501 0 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53788 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53788 ']' 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53788 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53788 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:21.501 killing process with pid 53788 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53788' 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53788 00:10:21.501 [2024-07-15 18:24:13.876872] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.501 18:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53788 00:10:21.759 [2024-07-15 18:24:13.899475] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qNSAkpAHHq 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != 
\0\.\0\0 ]] 00:10:21.759 00:10:21.759 real 0m6.916s 00:10:21.759 user 0m10.920s 00:10:21.759 sys 0m1.159s 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.759 18:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.759 ************************************ 00:10:21.759 END TEST raid_read_error_test 00:10:21.759 ************************************ 00:10:22.018 18:24:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:22.018 18:24:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:22.018 18:24:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:22.018 18:24:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.018 18:24:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.018 ************************************ 00:10:22.018 START TEST raid_write_error_test 00:10:22.018 ************************************ 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:22.018 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.04FUXLn1ae 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53923 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53923 /var/tmp/spdk-raid.sock 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53923 ']' 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.019 18:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.019 [2024-07-15 18:24:14.183778] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:10:22.019 [2024-07-15 18:24:14.183941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:22.587 EAL: TSC is not safe to use in SMP mode 00:10:22.587 EAL: TSC is not invariant 00:10:22.587 [2024-07-15 18:24:14.791970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.587 [2024-07-15 18:24:14.907503] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
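
[editor's note] The write-error variant is driven exactly like the read-error test above: bdevperf is launched against the shared RPC socket, the raid stack is assembled over RPC, and the workload only starts when the perform_tests RPC fires later in the trace. A condensed sketch of that driver pattern, reconstructed from the commands traced in this log (paths, socket, bdev name, and workload flags as observed; backgrounding, error handling, and cleanup omitted):

    # -z makes bdevperf wait for the perform_tests RPC before issuing IO,
    # so the malloc/error/passthru/raid stack can be built first over RPC
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &

    # ... stack construction via rpc.py, as traced below ...

    # release the paused workload (seen later in this trace):
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests
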
00:10:22.587 [2024-07-15 18:24:14.910035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.587 [2024-07-15 18:24:14.910858] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.587 [2024-07-15 18:24:14.910875] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.845 18:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.845 18:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:22.845 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:22.845 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.103 BaseBdev1_malloc 00:10:23.103 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:23.362 true 00:10:23.362 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.621 [2024-07-15 18:24:15.953122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.621 [2024-07-15 18:24:15.953194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.621 [2024-07-15 18:24:15.953221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x375f36234780 00:10:23.621 [2024-07-15 18:24:15.953230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.621 [2024-07-15 18:24:15.953907] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.621 [2024-07-15 18:24:15.953934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.621 BaseBdev1 00:10:23.621 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:23.621 18:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.879 BaseBdev2_malloc 00:10:23.879 18:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:24.138 true 00:10:24.138 18:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.397 [2024-07-15 18:24:16.737148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.397 [2024-07-15 18:24:16.737221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.397 [2024-07-15 18:24:16.737247] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x375f36234c80 00:10:24.397 [2024-07-15 18:24:16.737257] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.397 [2024-07-15 18:24:16.737963] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.397 [2024-07-15 18:24:16.737989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:10:24.397 BaseBdev2 00:10:24.397 18:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:24.397 18:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.655 BaseBdev3_malloc 00:10:24.655 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:24.914 true 00:10:24.914 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.173 [2024-07-15 18:24:17.525180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.173 [2024-07-15 18:24:17.525235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.173 [2024-07-15 18:24:17.525260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x375f36235180 00:10:25.173 [2024-07-15 18:24:17.525268] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.173 [2024-07-15 18:24:17.525956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.173 [2024-07-15 18:24:17.525981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.173 BaseBdev3 00:10:25.173 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:25.431 [2024-07-15 18:24:17.769197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.431 [2024-07-15 18:24:17.769814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.431 [2024-07-15 18:24:17.769840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.431 [2024-07-15 18:24:17.769901] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x375f36235400 00:10:25.431 [2024-07-15 18:24:17.769906] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.431 [2024-07-15 18:24:17.769945] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x375f362a0e20 00:10:25.431 [2024-07-15 18:24:17.770017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x375f36235400 00:10:25.431 [2024-07-15 18:24:17.770022] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x375f36235400 00:10:25.431 [2024-07-15 18:24:17.770049] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
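
[editor's note] Each base bdev traced above is a three-layer stack: a malloc bdev, an error-injection bdev on top of it (bdev_error_create names it EE_<malloc name>, as the passthru registration confirms), and a passthru bdev exposing the final BaseBdevN name that the raid consumes. A minimal sketch of the per-bdev sequence exactly as traced (32 MiB malloc disks with 512-byte blocks as observed; the loop form is the editor's condensation, not the test's literal layout):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${n}_malloc
        $RPC bdev_error_create BaseBdev${n}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${n}_malloc -p BaseBdev${n}
    done
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # later armed for this write test by:
    #   $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
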
00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.431 18:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.690 18:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.690 "name": "raid_bdev1", 00:10:25.690 "uuid": "7278bcea-42d7-11ef-9ade-d5fc5159efa5", 00:10:25.690 "strip_size_kb": 64, 00:10:25.690 "state": "online", 00:10:25.690 "raid_level": "raid0", 00:10:25.690 "superblock": true, 00:10:25.690 "num_base_bdevs": 3, 00:10:25.690 "num_base_bdevs_discovered": 3, 00:10:25.690 "num_base_bdevs_operational": 3, 00:10:25.690 "base_bdevs_list": [ 00:10:25.690 { 00:10:25.690 "name": "BaseBdev1", 00:10:25.690 "uuid": "2209427a-85a1-c75c-8482-18c08f434316", 00:10:25.690 "is_configured": true, 00:10:25.690 "data_offset": 2048, 00:10:25.690 "data_size": 63488 00:10:25.690 }, 00:10:25.690 { 00:10:25.690 "name": "BaseBdev2", 00:10:25.690 "uuid": "acd56083-f641-e756-a127-9033a25650de", 00:10:25.690 "is_configured": true, 00:10:25.690 "data_offset": 2048, 00:10:25.690 "data_size": 63488 00:10:25.690 }, 00:10:25.690 { 00:10:25.690 "name": "BaseBdev3", 00:10:25.690 "uuid": "d5fc4205-3980-145e-a929-fbdb570fe64f", 00:10:25.690 "is_configured": true, 00:10:25.690 "data_offset": 2048, 00:10:25.690 "data_size": 63488 00:10:25.690 } 00:10:25.690 ] 00:10:25.690 }' 00:10:25.690 18:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.690 18:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.258 18:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:26.258 18:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:26.258 [2024-07-15 18:24:18.461461] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x375f362a0ec0 00:10:27.193 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:27.451 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.452 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.710 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.710 "name": "raid_bdev1", 00:10:27.710 "uuid": "7278bcea-42d7-11ef-9ade-d5fc5159efa5", 00:10:27.710 "strip_size_kb": 64, 00:10:27.710 "state": "online", 00:10:27.710 "raid_level": "raid0", 00:10:27.710 "superblock": true, 00:10:27.710 "num_base_bdevs": 3, 00:10:27.710 "num_base_bdevs_discovered": 3, 00:10:27.710 "num_base_bdevs_operational": 3, 00:10:27.710 "base_bdevs_list": [ 00:10:27.710 { 00:10:27.710 "name": "BaseBdev1", 00:10:27.710 "uuid": "2209427a-85a1-c75c-8482-18c08f434316", 00:10:27.710 "is_configured": true, 00:10:27.710 "data_offset": 2048, 00:10:27.710 "data_size": 63488 00:10:27.710 }, 00:10:27.710 { 00:10:27.710 "name": "BaseBdev2", 00:10:27.710 "uuid": "acd56083-f641-e756-a127-9033a25650de", 00:10:27.710 "is_configured": true, 00:10:27.710 "data_offset": 2048, 00:10:27.710 "data_size": 63488 00:10:27.710 }, 00:10:27.710 { 00:10:27.710 "name": "BaseBdev3", 00:10:27.710 "uuid": "d5fc4205-3980-145e-a929-fbdb570fe64f", 00:10:27.710 "is_configured": true, 00:10:27.710 "data_offset": 2048, 00:10:27.710 "data_size": 63488 00:10:27.710 } 00:10:27.710 ] 00:10:27.710 }' 00:10:27.710 18:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.710 18:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.968 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:28.226 [2024-07-15 18:24:20.561146] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.226 [2024-07-15 18:24:20.561175] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.226 [2024-07-15 18:24:20.561542] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.226 [2024-07-15 18:24:20.561552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.226 [2024-07-15 18:24:20.561560] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.226 [2024-07-15 18:24:20.561564] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x375f36235400 name raid_bdev1, state offline 00:10:28.226 0 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53923 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53923 ']' 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53923 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53923 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:28.226 killing process with pid 53923 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53923' 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53923 00:10:28.226 [2024-07-15 18:24:20.589790] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.226 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53923 00:10:28.485 [2024-07-15 18:24:20.611872] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.04FUXLn1ae 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:10:28.485 00:10:28.485 real 0m6.671s 00:10:28.485 user 0m10.433s 00:10:28.485 sys 0m1.166s 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.485 18:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.485 ************************************ 00:10:28.485 END TEST raid_write_error_test 00:10:28.485 ************************************ 00:10:28.743 18:24:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:28.743 18:24:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:28.743 18:24:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:28.743 18:24:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:28.743 18:24:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.743 18:24:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.743 ************************************ 00:10:28.743 START TEST raid_state_function_test 00:10:28.743 ************************************ 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54052 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54052' 00:10:28.743 Process raid pid: 54052 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54052 /var/tmp/spdk-raid.sock 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 54052 ']' 
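
[editor's note] Unlike the two error tests, raid_state_function_test drives a bare bdev_svc app (no bdevperf workload): it creates the raid before any base bdev exists and then watches the raid's "state" field move between configuring and online as base bdevs are added and removed. Every state check below uses the same RPC-plus-jq pipeline, as traced:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
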
00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.743 18:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.743 [2024-07-15 18:24:20.901412] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:10:28.743 [2024-07-15 18:24:20.901687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:29.309 EAL: TSC is not safe to use in SMP mode 00:10:29.309 EAL: TSC is not invariant 00:10:29.309 [2024-07-15 18:24:21.498198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.309 [2024-07-15 18:24:21.606039] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:29.309 [2024-07-15 18:24:21.608123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.309 [2024-07-15 18:24:21.608887] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.309 [2024-07-15 18:24:21.608901] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.567 18:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.568 18:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:29.568 18:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:29.826 [2024-07-15 18:24:22.200894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.826 [2024-07-15 18:24:22.200949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.826 [2024-07-15 18:24:22.200954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.826 [2024-07-15 18:24:22.200963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.826 [2024-07-15 18:24:22.200966] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.826 [2024-07-15 18:24:22.200974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.084 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.084 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.084 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.084 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.085 "name": "Existed_Raid", 00:10:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.085 "strip_size_kb": 64, 00:10:30.085 "state": "configuring", 00:10:30.085 "raid_level": "concat", 00:10:30.085 "superblock": false, 00:10:30.085 "num_base_bdevs": 3, 00:10:30.085 "num_base_bdevs_discovered": 0, 00:10:30.085 "num_base_bdevs_operational": 3, 00:10:30.085 "base_bdevs_list": [ 00:10:30.085 { 00:10:30.085 "name": "BaseBdev1", 00:10:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.085 "is_configured": false, 00:10:30.085 "data_offset": 0, 00:10:30.085 "data_size": 0 00:10:30.085 }, 00:10:30.085 { 00:10:30.085 "name": "BaseBdev2", 00:10:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.085 "is_configured": false, 00:10:30.085 "data_offset": 0, 00:10:30.085 "data_size": 0 00:10:30.085 }, 00:10:30.085 { 00:10:30.085 "name": "BaseBdev3", 00:10:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.085 "is_configured": false, 00:10:30.085 "data_offset": 0, 00:10:30.085 "data_size": 0 00:10:30.085 } 00:10:30.085 ] 00:10:30.085 }' 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.085 18:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.652 18:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:30.910 [2024-07-15 18:24:23.060913] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.910 [2024-07-15 18:24:23.060939] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e62ba434500 name Existed_Raid, state configuring 00:10:30.911 18:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:31.168 [2024-07-15 18:24:23.356935] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.168 [2024-07-15 18:24:23.356988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.168 [2024-07-15 18:24:23.356993] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.168 [2024-07-15 18:24:23.357001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.168 [2024-07-15 
18:24:23.357005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.168 [2024-07-15 18:24:23.357012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.168 18:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.426 [2024-07-15 18:24:23.593986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.426 BaseBdev1 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:31.426 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:31.684 18:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.943 [ 00:10:31.943 { 00:10:31.943 "name": "BaseBdev1", 00:10:31.943 "aliases": [ 00:10:31.943 "75f15fb6-42d7-11ef-9ade-d5fc5159efa5" 00:10:31.943 ], 00:10:31.943 "product_name": "Malloc disk", 00:10:31.943 "block_size": 512, 00:10:31.943 "num_blocks": 65536, 00:10:31.943 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:31.943 "assigned_rate_limits": { 00:10:31.943 "rw_ios_per_sec": 0, 00:10:31.943 "rw_mbytes_per_sec": 0, 00:10:31.943 "r_mbytes_per_sec": 0, 00:10:31.943 "w_mbytes_per_sec": 0 00:10:31.943 }, 00:10:31.943 "claimed": true, 00:10:31.943 "claim_type": "exclusive_write", 00:10:31.943 "zoned": false, 00:10:31.943 "supported_io_types": { 00:10:31.943 "read": true, 00:10:31.943 "write": true, 00:10:31.943 "unmap": true, 00:10:31.943 "flush": true, 00:10:31.943 "reset": true, 00:10:31.943 "nvme_admin": false, 00:10:31.943 "nvme_io": false, 00:10:31.943 "nvme_io_md": false, 00:10:31.943 "write_zeroes": true, 00:10:31.943 "zcopy": true, 00:10:31.943 "get_zone_info": false, 00:10:31.943 "zone_management": false, 00:10:31.943 "zone_append": false, 00:10:31.943 "compare": false, 00:10:31.943 "compare_and_write": false, 00:10:31.943 "abort": true, 00:10:31.943 "seek_hole": false, 00:10:31.943 "seek_data": false, 00:10:31.943 "copy": true, 00:10:31.943 "nvme_iov_md": false 00:10:31.943 }, 00:10:31.943 "memory_domains": [ 00:10:31.943 { 00:10:31.943 "dma_device_id": "system", 00:10:31.943 "dma_device_type": 1 00:10:31.943 }, 00:10:31.943 { 00:10:31.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.943 "dma_device_type": 2 00:10:31.943 } 00:10:31.943 ], 00:10:31.943 "driver_specific": {} 00:10:31.943 } 00:10:31.943 ] 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
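
[editor's note] At this point only BaseBdev1 of the three configured slots exists, so the raid must remain in the "configuring" state with num_base_bdevs_discovered at 1, which the JSON dump that follows confirms. A sketch of the kind of assertion being made here; the .state field extraction is the editor's shorthand, while the test itself compares the full JSON produced by the pipeline shown earlier:

    state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ $state == configuring ]]
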
00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.943 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.202 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.202 "name": "Existed_Raid", 00:10:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.202 "strip_size_kb": 64, 00:10:32.202 "state": "configuring", 00:10:32.202 "raid_level": "concat", 00:10:32.202 "superblock": false, 00:10:32.202 "num_base_bdevs": 3, 00:10:32.202 "num_base_bdevs_discovered": 1, 00:10:32.202 "num_base_bdevs_operational": 3, 00:10:32.202 "base_bdevs_list": [ 00:10:32.202 { 00:10:32.202 "name": "BaseBdev1", 00:10:32.202 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:32.202 "is_configured": true, 00:10:32.202 "data_offset": 0, 00:10:32.202 "data_size": 65536 00:10:32.202 }, 00:10:32.202 { 00:10:32.202 "name": "BaseBdev2", 00:10:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.202 "is_configured": false, 00:10:32.202 "data_offset": 0, 00:10:32.202 "data_size": 0 00:10:32.202 }, 00:10:32.202 { 00:10:32.202 "name": "BaseBdev3", 00:10:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.202 "is_configured": false, 00:10:32.202 "data_offset": 0, 00:10:32.202 "data_size": 0 00:10:32.202 } 00:10:32.202 ] 00:10:32.202 }' 00:10:32.202 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.202 18:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.461 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:32.720 [2024-07-15 18:24:24.924997] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.720 [2024-07-15 18:24:24.925032] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e62ba434500 name Existed_Raid, state configuring 00:10:32.720 18:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:32.979 [2024-07-15 18:24:25.189020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:10:32.979 [2024-07-15 18:24:25.189853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.979 [2024-07-15 18:24:25.189892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.979 [2024-07-15 18:24:25.189897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.979 [2024-07-15 18:24:25.189906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.979 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.238 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.238 "name": "Existed_Raid", 00:10:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.238 "strip_size_kb": 64, 00:10:33.238 "state": "configuring", 00:10:33.238 "raid_level": "concat", 00:10:33.238 "superblock": false, 00:10:33.238 "num_base_bdevs": 3, 00:10:33.238 "num_base_bdevs_discovered": 1, 00:10:33.238 "num_base_bdevs_operational": 3, 00:10:33.238 "base_bdevs_list": [ 00:10:33.238 { 00:10:33.238 "name": "BaseBdev1", 00:10:33.238 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:33.238 "is_configured": true, 00:10:33.238 "data_offset": 0, 00:10:33.238 "data_size": 65536 00:10:33.238 }, 00:10:33.238 { 00:10:33.238 "name": "BaseBdev2", 00:10:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.238 "is_configured": false, 00:10:33.238 "data_offset": 0, 00:10:33.238 "data_size": 0 00:10:33.238 }, 00:10:33.238 { 00:10:33.238 "name": "BaseBdev3", 00:10:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.238 "is_configured": false, 00:10:33.238 "data_offset": 0, 00:10:33.238 "data_size": 0 00:10:33.238 } 00:10:33.238 ] 00:10:33.238 }' 00:10:33.238 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.238 18:24:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.497 18:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.756 [2024-07-15 18:24:26.105200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.756 BaseBdev2 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:33.756 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.323 [ 00:10:34.323 { 00:10:34.323 "name": "BaseBdev2", 00:10:34.323 "aliases": [ 00:10:34.323 "7770b0f5-42d7-11ef-9ade-d5fc5159efa5" 00:10:34.323 ], 00:10:34.323 "product_name": "Malloc disk", 00:10:34.323 "block_size": 512, 00:10:34.323 "num_blocks": 65536, 00:10:34.323 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:34.323 "assigned_rate_limits": { 00:10:34.323 "rw_ios_per_sec": 0, 00:10:34.323 "rw_mbytes_per_sec": 0, 00:10:34.323 "r_mbytes_per_sec": 0, 00:10:34.323 "w_mbytes_per_sec": 0 00:10:34.323 }, 00:10:34.323 "claimed": true, 00:10:34.323 "claim_type": "exclusive_write", 00:10:34.323 "zoned": false, 00:10:34.323 "supported_io_types": { 00:10:34.323 "read": true, 00:10:34.323 "write": true, 00:10:34.323 "unmap": true, 00:10:34.323 "flush": true, 00:10:34.323 "reset": true, 00:10:34.323 "nvme_admin": false, 00:10:34.323 "nvme_io": false, 00:10:34.323 "nvme_io_md": false, 00:10:34.323 "write_zeroes": true, 00:10:34.323 "zcopy": true, 00:10:34.323 "get_zone_info": false, 00:10:34.323 "zone_management": false, 00:10:34.323 "zone_append": false, 00:10:34.323 "compare": false, 00:10:34.323 "compare_and_write": false, 00:10:34.323 "abort": true, 00:10:34.323 "seek_hole": false, 00:10:34.323 "seek_data": false, 00:10:34.323 "copy": true, 00:10:34.323 "nvme_iov_md": false 00:10:34.323 }, 00:10:34.323 "memory_domains": [ 00:10:34.323 { 00:10:34.323 "dma_device_id": "system", 00:10:34.323 "dma_device_type": 1 00:10:34.323 }, 00:10:34.323 { 00:10:34.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.323 "dma_device_type": 2 00:10:34.323 } 00:10:34.323 ], 00:10:34.323 "driver_specific": {} 00:10:34.323 } 00:10:34.323 ] 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.323 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.324 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.582 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.582 "name": "Existed_Raid", 00:10:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.582 "strip_size_kb": 64, 00:10:34.582 "state": "configuring", 00:10:34.582 "raid_level": "concat", 00:10:34.582 "superblock": false, 00:10:34.582 "num_base_bdevs": 3, 00:10:34.582 "num_base_bdevs_discovered": 2, 00:10:34.582 "num_base_bdevs_operational": 3, 00:10:34.582 "base_bdevs_list": [ 00:10:34.582 { 00:10:34.582 "name": "BaseBdev1", 00:10:34.582 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:34.582 "is_configured": true, 00:10:34.582 "data_offset": 0, 00:10:34.582 "data_size": 65536 00:10:34.582 }, 00:10:34.582 { 00:10:34.582 "name": "BaseBdev2", 00:10:34.582 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:34.582 "is_configured": true, 00:10:34.582 "data_offset": 0, 00:10:34.582 "data_size": 65536 00:10:34.582 }, 00:10:34.582 { 00:10:34.582 "name": "BaseBdev3", 00:10:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.582 "is_configured": false, 00:10:34.582 "data_offset": 0, 00:10:34.582 "data_size": 0 00:10:34.582 } 00:10:34.582 ] 00:10:34.582 }' 00:10:34.582 18:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.582 18:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.150 [2024-07-15 18:24:27.513260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.150 [2024-07-15 18:24:27.513290] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e62ba434a00 00:10:35.150 [2024-07-15 18:24:27.513295] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:35.150 [2024-07-15 18:24:27.513316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e62ba497e20 00:10:35.150 [2024-07-15 18:24:27.513423] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e62ba434a00 00:10:35.150 [2024-07-15 18:24:27.513427] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e62ba434a00 00:10:35.150 [2024-07-15 18:24:27.513460] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.150 BaseBdev3 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:35.150 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.719 18:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.719 [ 00:10:35.719 { 00:10:35.719 "name": "BaseBdev3", 00:10:35.719 "aliases": [ 00:10:35.719 "78478bd2-42d7-11ef-9ade-d5fc5159efa5" 00:10:35.719 ], 00:10:35.719 "product_name": "Malloc disk", 00:10:35.719 "block_size": 512, 00:10:35.719 "num_blocks": 65536, 00:10:35.719 "uuid": "78478bd2-42d7-11ef-9ade-d5fc5159efa5", 00:10:35.719 "assigned_rate_limits": { 00:10:35.719 "rw_ios_per_sec": 0, 00:10:35.719 "rw_mbytes_per_sec": 0, 00:10:35.719 "r_mbytes_per_sec": 0, 00:10:35.719 "w_mbytes_per_sec": 0 00:10:35.719 }, 00:10:35.719 "claimed": true, 00:10:35.719 "claim_type": "exclusive_write", 00:10:35.719 "zoned": false, 00:10:35.719 "supported_io_types": { 00:10:35.719 "read": true, 00:10:35.719 "write": true, 00:10:35.719 "unmap": true, 00:10:35.719 "flush": true, 00:10:35.719 "reset": true, 00:10:35.719 "nvme_admin": false, 00:10:35.719 "nvme_io": false, 00:10:35.719 "nvme_io_md": false, 00:10:35.719 "write_zeroes": true, 00:10:35.719 "zcopy": true, 00:10:35.719 "get_zone_info": false, 00:10:35.719 "zone_management": false, 00:10:35.719 "zone_append": false, 00:10:35.719 "compare": false, 00:10:35.719 "compare_and_write": false, 00:10:35.719 "abort": true, 00:10:35.719 "seek_hole": false, 00:10:35.719 "seek_data": false, 00:10:35.719 "copy": true, 00:10:35.719 "nvme_iov_md": false 00:10:35.719 }, 00:10:35.719 "memory_domains": [ 00:10:35.719 { 00:10:35.719 "dma_device_id": "system", 00:10:35.719 "dma_device_type": 1 00:10:35.719 }, 00:10:35.719 { 00:10:35.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.719 "dma_device_type": 2 00:10:35.719 } 00:10:35.719 ], 00:10:35.719 "driver_specific": {} 00:10:35.719 } 00:10:35.719 ] 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.719 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.977 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.977 "name": "Existed_Raid", 00:10:35.977 "uuid": "7847924c-42d7-11ef-9ade-d5fc5159efa5", 00:10:35.977 "strip_size_kb": 64, 00:10:35.977 "state": "online", 00:10:35.977 "raid_level": "concat", 00:10:35.977 "superblock": false, 00:10:35.977 "num_base_bdevs": 3, 00:10:35.977 "num_base_bdevs_discovered": 3, 00:10:35.977 "num_base_bdevs_operational": 3, 00:10:35.977 "base_bdevs_list": [ 00:10:35.977 { 00:10:35.977 "name": "BaseBdev1", 00:10:35.977 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:35.978 "is_configured": true, 00:10:35.978 "data_offset": 0, 00:10:35.978 "data_size": 65536 00:10:35.978 }, 00:10:35.978 { 00:10:35.978 "name": "BaseBdev2", 00:10:35.978 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:35.978 "is_configured": true, 00:10:35.978 "data_offset": 0, 00:10:35.978 "data_size": 65536 00:10:35.978 }, 00:10:35.978 { 00:10:35.978 "name": "BaseBdev3", 00:10:35.978 "uuid": "78478bd2-42d7-11ef-9ade-d5fc5159efa5", 00:10:35.978 "is_configured": true, 00:10:35.978 "data_offset": 0, 00:10:35.978 "data_size": 65536 00:10:35.978 } 00:10:35.978 ] 00:10:35.978 }' 00:10:35.978 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.978 18:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:36.544 [2024-07-15 18:24:28.901223] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.544 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:36.544 "name": "Existed_Raid", 00:10:36.544 "aliases": [ 00:10:36.544 "7847924c-42d7-11ef-9ade-d5fc5159efa5" 00:10:36.544 ], 00:10:36.544 "product_name": "Raid Volume", 00:10:36.544 "block_size": 512, 00:10:36.544 "num_blocks": 196608, 00:10:36.544 "uuid": "7847924c-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.544 "assigned_rate_limits": { 00:10:36.544 "rw_ios_per_sec": 0, 00:10:36.544 "rw_mbytes_per_sec": 0, 00:10:36.545 "r_mbytes_per_sec": 0, 00:10:36.545 "w_mbytes_per_sec": 0 00:10:36.545 }, 00:10:36.545 "claimed": false, 00:10:36.545 "zoned": false, 00:10:36.545 "supported_io_types": { 00:10:36.545 "read": true, 00:10:36.545 "write": true, 00:10:36.545 "unmap": true, 00:10:36.545 "flush": true, 00:10:36.545 "reset": true, 00:10:36.545 "nvme_admin": false, 00:10:36.545 "nvme_io": false, 00:10:36.545 "nvme_io_md": false, 00:10:36.545 "write_zeroes": true, 00:10:36.545 "zcopy": false, 00:10:36.545 "get_zone_info": false, 00:10:36.545 "zone_management": false, 00:10:36.545 "zone_append": false, 00:10:36.545 "compare": false, 00:10:36.545 "compare_and_write": false, 00:10:36.545 "abort": false, 00:10:36.545 "seek_hole": false, 00:10:36.545 "seek_data": false, 00:10:36.545 "copy": false, 00:10:36.545 "nvme_iov_md": false 00:10:36.545 }, 00:10:36.545 "memory_domains": [ 00:10:36.545 { 00:10:36.545 "dma_device_id": "system", 00:10:36.545 "dma_device_type": 1 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.545 "dma_device_type": 2 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "dma_device_id": "system", 00:10:36.545 "dma_device_type": 1 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.545 "dma_device_type": 2 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "dma_device_id": "system", 00:10:36.545 "dma_device_type": 1 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.545 "dma_device_type": 2 00:10:36.545 } 00:10:36.545 ], 00:10:36.545 "driver_specific": { 00:10:36.545 "raid": { 00:10:36.545 "uuid": "7847924c-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.545 "strip_size_kb": 64, 00:10:36.545 "state": "online", 00:10:36.545 "raid_level": "concat", 00:10:36.545 "superblock": false, 00:10:36.545 "num_base_bdevs": 3, 00:10:36.545 "num_base_bdevs_discovered": 3, 00:10:36.545 "num_base_bdevs_operational": 3, 00:10:36.545 "base_bdevs_list": [ 00:10:36.545 { 00:10:36.545 "name": "BaseBdev1", 00:10:36.545 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.545 "is_configured": true, 00:10:36.545 "data_offset": 0, 00:10:36.545 "data_size": 65536 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "name": "BaseBdev2", 00:10:36.545 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.545 "is_configured": true, 00:10:36.545 "data_offset": 0, 00:10:36.545 "data_size": 65536 00:10:36.545 }, 00:10:36.545 { 00:10:36.545 "name": "BaseBdev3", 00:10:36.545 "uuid": "78478bd2-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.545 "is_configured": true, 00:10:36.545 "data_offset": 0, 00:10:36.545 "data_size": 65536 00:10:36.545 } 00:10:36.545 ] 00:10:36.545 } 00:10:36.545 } 00:10:36.545 }' 00:10:36.545 18:24:28 
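For context on the checks that follow: verify_raid_bdev_properties dumps the assembled volume once (bdev_get_bdevs -b Existed_Raid), extracts the configured member names with jq, then re-queries each member and asserts block_size is 512 while md_size, md_interleave and dif_type all come back null. A condensed sketch of the same probes, reusing the rpc.py path and socket from this run (the loop form is illustrative, not the harness code):

  names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $names; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" \
      | jq '.[0].block_size, .[0].md_size, .[0].dif_type'   # expect 512, null, null for each malloc member
  done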
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.545 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:36.545 BaseBdev2 00:10:36.545 BaseBdev3' 00:10:36.545 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:36.802 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:36.802 18:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:36.802 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:36.802 "name": "BaseBdev1", 00:10:36.802 "aliases": [ 00:10:36.802 "75f15fb6-42d7-11ef-9ade-d5fc5159efa5" 00:10:36.802 ], 00:10:36.802 "product_name": "Malloc disk", 00:10:36.802 "block_size": 512, 00:10:36.802 "num_blocks": 65536, 00:10:36.802 "uuid": "75f15fb6-42d7-11ef-9ade-d5fc5159efa5", 00:10:36.802 "assigned_rate_limits": { 00:10:36.802 "rw_ios_per_sec": 0, 00:10:36.802 "rw_mbytes_per_sec": 0, 00:10:36.802 "r_mbytes_per_sec": 0, 00:10:36.802 "w_mbytes_per_sec": 0 00:10:36.802 }, 00:10:36.802 "claimed": true, 00:10:36.802 "claim_type": "exclusive_write", 00:10:36.802 "zoned": false, 00:10:36.802 "supported_io_types": { 00:10:36.802 "read": true, 00:10:36.802 "write": true, 00:10:36.802 "unmap": true, 00:10:36.802 "flush": true, 00:10:36.802 "reset": true, 00:10:36.802 "nvme_admin": false, 00:10:36.802 "nvme_io": false, 00:10:36.802 "nvme_io_md": false, 00:10:36.802 "write_zeroes": true, 00:10:36.802 "zcopy": true, 00:10:36.802 "get_zone_info": false, 00:10:36.802 "zone_management": false, 00:10:36.802 "zone_append": false, 00:10:36.802 "compare": false, 00:10:36.802 "compare_and_write": false, 00:10:36.802 "abort": true, 00:10:36.802 "seek_hole": false, 00:10:36.802 "seek_data": false, 00:10:36.802 "copy": true, 00:10:36.802 "nvme_iov_md": false 00:10:36.802 }, 00:10:36.802 "memory_domains": [ 00:10:36.802 { 00:10:36.802 "dma_device_id": "system", 00:10:36.803 "dma_device_type": 1 00:10:36.803 }, 00:10:36.803 { 00:10:36.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.803 "dma_device_type": 2 00:10:36.803 } 00:10:36.803 ], 00:10:36.803 "driver_specific": {} 00:10:36.803 }' 00:10:36.803 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:37.061 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.320 "name": "BaseBdev2", 00:10:37.320 "aliases": [ 00:10:37.320 "7770b0f5-42d7-11ef-9ade-d5fc5159efa5" 00:10:37.320 ], 00:10:37.320 "product_name": "Malloc disk", 00:10:37.320 "block_size": 512, 00:10:37.320 "num_blocks": 65536, 00:10:37.320 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:37.320 "assigned_rate_limits": { 00:10:37.320 "rw_ios_per_sec": 0, 00:10:37.320 "rw_mbytes_per_sec": 0, 00:10:37.320 "r_mbytes_per_sec": 0, 00:10:37.320 "w_mbytes_per_sec": 0 00:10:37.320 }, 00:10:37.320 "claimed": true, 00:10:37.320 "claim_type": "exclusive_write", 00:10:37.320 "zoned": false, 00:10:37.320 "supported_io_types": { 00:10:37.320 "read": true, 00:10:37.320 "write": true, 00:10:37.320 "unmap": true, 00:10:37.320 "flush": true, 00:10:37.320 "reset": true, 00:10:37.320 "nvme_admin": false, 00:10:37.320 "nvme_io": false, 00:10:37.320 "nvme_io_md": false, 00:10:37.320 "write_zeroes": true, 00:10:37.320 "zcopy": true, 00:10:37.320 "get_zone_info": false, 00:10:37.320 "zone_management": false, 00:10:37.320 "zone_append": false, 00:10:37.320 "compare": false, 00:10:37.320 "compare_and_write": false, 00:10:37.320 "abort": true, 00:10:37.320 "seek_hole": false, 00:10:37.320 "seek_data": false, 00:10:37.320 "copy": true, 00:10:37.320 "nvme_iov_md": false 00:10:37.320 }, 00:10:37.320 "memory_domains": [ 00:10:37.320 { 00:10:37.320 "dma_device_id": "system", 00:10:37.320 "dma_device_type": 1 00:10:37.320 }, 00:10:37.320 { 00:10:37.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.320 "dma_device_type": 2 00:10:37.320 } 00:10:37.320 ], 00:10:37.320 "driver_specific": {} 00:10:37.320 }' 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.320 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.321 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.579 "name": "BaseBdev3", 00:10:37.579 "aliases": [ 00:10:37.579 "78478bd2-42d7-11ef-9ade-d5fc5159efa5" 00:10:37.579 ], 00:10:37.579 "product_name": "Malloc disk", 00:10:37.579 "block_size": 512, 00:10:37.579 "num_blocks": 65536, 00:10:37.579 "uuid": "78478bd2-42d7-11ef-9ade-d5fc5159efa5", 00:10:37.579 "assigned_rate_limits": { 00:10:37.579 "rw_ios_per_sec": 0, 00:10:37.579 "rw_mbytes_per_sec": 0, 00:10:37.579 "r_mbytes_per_sec": 0, 00:10:37.579 "w_mbytes_per_sec": 0 00:10:37.579 }, 00:10:37.579 "claimed": true, 00:10:37.579 "claim_type": "exclusive_write", 00:10:37.579 "zoned": false, 00:10:37.579 "supported_io_types": { 00:10:37.579 "read": true, 00:10:37.579 "write": true, 00:10:37.579 "unmap": true, 00:10:37.579 "flush": true, 00:10:37.579 "reset": true, 00:10:37.579 "nvme_admin": false, 00:10:37.579 "nvme_io": false, 00:10:37.579 "nvme_io_md": false, 00:10:37.579 "write_zeroes": true, 00:10:37.579 "zcopy": true, 00:10:37.579 "get_zone_info": false, 00:10:37.579 "zone_management": false, 00:10:37.579 "zone_append": false, 00:10:37.579 "compare": false, 00:10:37.579 "compare_and_write": false, 00:10:37.579 "abort": true, 00:10:37.579 "seek_hole": false, 00:10:37.579 "seek_data": false, 00:10:37.579 "copy": true, 00:10:37.579 "nvme_iov_md": false 00:10:37.579 }, 00:10:37.579 "memory_domains": [ 00:10:37.579 { 00:10:37.579 "dma_device_id": "system", 00:10:37.579 "dma_device_type": 1 00:10:37.579 }, 00:10:37.579 { 00:10:37.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.579 "dma_device_type": 2 00:10:37.579 } 00:10:37.579 ], 00:10:37.579 "driver_specific": {} 00:10:37.579 }' 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.579 18:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:37.878 [2024-07-15 18:24:30.061248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:37.878 [2024-07-15 18:24:30.061272] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.878 [2024-07-15 18:24:30.061303] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.878 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.191 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:38.191 "name": "Existed_Raid", 00:10:38.191 "uuid": "7847924c-42d7-11ef-9ade-d5fc5159efa5", 00:10:38.191 "strip_size_kb": 64, 00:10:38.191 "state": "offline", 00:10:38.191 "raid_level": "concat", 00:10:38.191 "superblock": false, 00:10:38.191 "num_base_bdevs": 3, 00:10:38.191 "num_base_bdevs_discovered": 2, 00:10:38.191 "num_base_bdevs_operational": 2, 00:10:38.191 "base_bdevs_list": [ 00:10:38.191 { 00:10:38.191 "name": null, 00:10:38.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.191 "is_configured": false, 00:10:38.191 "data_offset": 0, 00:10:38.191 "data_size": 65536 00:10:38.191 }, 00:10:38.191 { 00:10:38.191 "name": "BaseBdev2", 00:10:38.191 "uuid": "7770b0f5-42d7-11ef-9ade-d5fc5159efa5", 00:10:38.191 "is_configured": true, 00:10:38.191 "data_offset": 0, 00:10:38.191 "data_size": 65536 00:10:38.191 }, 00:10:38.191 { 00:10:38.191 "name": "BaseBdev3", 00:10:38.191 "uuid": "78478bd2-42d7-11ef-9ade-d5fc5159efa5", 00:10:38.191 "is_configured": true, 00:10:38.191 "data_offset": 0, 00:10:38.191 "data_size": 65536 00:10:38.191 } 00:10:38.191 ] 00:10:38.191 }' 00:10:38.191 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
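The step above is the no-redundancy branch: has_redundancy returns 1 for concat (bdev_raid.sh@215), so deleting a single member is expected to take the whole array from online to offline rather than merely degrade it. A minimal reproduction of that assertion against the same socket (illustrative, not harness code):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # prints "offline" on this path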
00:10:38.191 18:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.499 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:38.499 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:38.499 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.499 18:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:38.756 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:38.756 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.756 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:39.013 [2024-07-15 18:24:31.255222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.013 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:39.013 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.013 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.013 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:39.271 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:39.271 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.271 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:39.530 [2024-07-15 18:24:31.779735] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.530 [2024-07-15 18:24:31.779768] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e62ba434a00 name Existed_Raid, state offline 00:10:39.530 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:39.530 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.530 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.530 18:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.789 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:39.790 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:39.790 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:39.790 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:39.790 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:39.790 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.048 
BaseBdev2 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:40.048 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.305 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.563 [ 00:10:40.563 { 00:10:40.563 "name": "BaseBdev2", 00:10:40.563 "aliases": [ 00:10:40.563 "7b2161df-42d7-11ef-9ade-d5fc5159efa5" 00:10:40.563 ], 00:10:40.563 "product_name": "Malloc disk", 00:10:40.563 "block_size": 512, 00:10:40.563 "num_blocks": 65536, 00:10:40.563 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:40.563 "assigned_rate_limits": { 00:10:40.563 "rw_ios_per_sec": 0, 00:10:40.563 "rw_mbytes_per_sec": 0, 00:10:40.563 "r_mbytes_per_sec": 0, 00:10:40.563 "w_mbytes_per_sec": 0 00:10:40.563 }, 00:10:40.563 "claimed": false, 00:10:40.563 "zoned": false, 00:10:40.563 "supported_io_types": { 00:10:40.563 "read": true, 00:10:40.563 "write": true, 00:10:40.563 "unmap": true, 00:10:40.563 "flush": true, 00:10:40.563 "reset": true, 00:10:40.563 "nvme_admin": false, 00:10:40.563 "nvme_io": false, 00:10:40.563 "nvme_io_md": false, 00:10:40.563 "write_zeroes": true, 00:10:40.563 "zcopy": true, 00:10:40.563 "get_zone_info": false, 00:10:40.563 "zone_management": false, 00:10:40.563 "zone_append": false, 00:10:40.563 "compare": false, 00:10:40.563 "compare_and_write": false, 00:10:40.563 "abort": true, 00:10:40.563 "seek_hole": false, 00:10:40.563 "seek_data": false, 00:10:40.563 "copy": true, 00:10:40.563 "nvme_iov_md": false 00:10:40.563 }, 00:10:40.563 "memory_domains": [ 00:10:40.563 { 00:10:40.563 "dma_device_id": "system", 00:10:40.563 "dma_device_type": 1 00:10:40.563 }, 00:10:40.563 { 00:10:40.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.563 "dma_device_type": 2 00:10:40.563 } 00:10:40.563 ], 00:10:40.563 "driver_specific": {} 00:10:40.563 } 00:10:40.563 ] 00:10:40.563 18:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:40.563 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:40.563 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:40.563 18:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.821 BaseBdev3 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:40.821 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:41.078 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.336 [ 00:10:41.336 { 00:10:41.336 "name": "BaseBdev3", 00:10:41.336 "aliases": [ 00:10:41.336 "7b92e96d-42d7-11ef-9ade-d5fc5159efa5" 00:10:41.336 ], 00:10:41.336 "product_name": "Malloc disk", 00:10:41.336 "block_size": 512, 00:10:41.336 "num_blocks": 65536, 00:10:41.336 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:41.336 "assigned_rate_limits": { 00:10:41.336 "rw_ios_per_sec": 0, 00:10:41.336 "rw_mbytes_per_sec": 0, 00:10:41.336 "r_mbytes_per_sec": 0, 00:10:41.336 "w_mbytes_per_sec": 0 00:10:41.336 }, 00:10:41.336 "claimed": false, 00:10:41.336 "zoned": false, 00:10:41.336 "supported_io_types": { 00:10:41.336 "read": true, 00:10:41.336 "write": true, 00:10:41.336 "unmap": true, 00:10:41.336 "flush": true, 00:10:41.336 "reset": true, 00:10:41.336 "nvme_admin": false, 00:10:41.336 "nvme_io": false, 00:10:41.336 "nvme_io_md": false, 00:10:41.336 "write_zeroes": true, 00:10:41.336 "zcopy": true, 00:10:41.336 "get_zone_info": false, 00:10:41.336 "zone_management": false, 00:10:41.336 "zone_append": false, 00:10:41.336 "compare": false, 00:10:41.336 "compare_and_write": false, 00:10:41.336 "abort": true, 00:10:41.336 "seek_hole": false, 00:10:41.336 "seek_data": false, 00:10:41.336 "copy": true, 00:10:41.336 "nvme_iov_md": false 00:10:41.336 }, 00:10:41.336 "memory_domains": [ 00:10:41.336 { 00:10:41.336 "dma_device_id": "system", 00:10:41.336 "dma_device_type": 1 00:10:41.336 }, 00:10:41.336 { 00:10:41.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.336 "dma_device_type": 2 00:10:41.336 } 00:10:41.336 ], 00:10:41.336 "driver_specific": {} 00:10:41.336 } 00:10:41.336 ] 00:10:41.336 18:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:41.336 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:41.336 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:41.336 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:41.631 [2024-07-15 18:24:33.752253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.631 [2024-07-15 18:24:33.752307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.631 [2024-07-15 18:24:33.752316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.631 [2024-07-15 18:24:33.752900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.631 18:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.907 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.907 "name": "Existed_Raid", 00:10:41.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.907 "strip_size_kb": 64, 00:10:41.907 "state": "configuring", 00:10:41.907 "raid_level": "concat", 00:10:41.907 "superblock": false, 00:10:41.907 "num_base_bdevs": 3, 00:10:41.907 "num_base_bdevs_discovered": 2, 00:10:41.907 "num_base_bdevs_operational": 3, 00:10:41.907 "base_bdevs_list": [ 00:10:41.907 { 00:10:41.907 "name": "BaseBdev1", 00:10:41.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.907 "is_configured": false, 00:10:41.907 "data_offset": 0, 00:10:41.907 "data_size": 0 00:10:41.907 }, 00:10:41.907 { 00:10:41.907 "name": "BaseBdev2", 00:10:41.907 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:41.907 "is_configured": true, 00:10:41.907 "data_offset": 0, 00:10:41.907 "data_size": 65536 00:10:41.907 }, 00:10:41.907 { 00:10:41.907 "name": "BaseBdev3", 00:10:41.907 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:41.907 "is_configured": true, 00:10:41.907 "data_offset": 0, 00:10:41.907 "data_size": 65536 00:10:41.907 } 00:10:41.907 ] 00:10:41.907 }' 00:10:41.907 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.907 18:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.166 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:42.424 [2024-07-15 18:24:34.632297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:42.424 
18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.424 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.425 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.425 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.425 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.683 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.683 "name": "Existed_Raid", 00:10:42.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.683 "strip_size_kb": 64, 00:10:42.683 "state": "configuring", 00:10:42.683 "raid_level": "concat", 00:10:42.683 "superblock": false, 00:10:42.683 "num_base_bdevs": 3, 00:10:42.683 "num_base_bdevs_discovered": 1, 00:10:42.683 "num_base_bdevs_operational": 3, 00:10:42.683 "base_bdevs_list": [ 00:10:42.683 { 00:10:42.683 "name": "BaseBdev1", 00:10:42.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.683 "is_configured": false, 00:10:42.683 "data_offset": 0, 00:10:42.683 "data_size": 0 00:10:42.683 }, 00:10:42.683 { 00:10:42.683 "name": null, 00:10:42.683 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:42.683 "is_configured": false, 00:10:42.683 "data_offset": 0, 00:10:42.683 "data_size": 65536 00:10:42.683 }, 00:10:42.683 { 00:10:42.683 "name": "BaseBdev3", 00:10:42.683 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:42.683 "is_configured": true, 00:10:42.683 "data_offset": 0, 00:10:42.683 "data_size": 65536 00:10:42.683 } 00:10:42.683 ] 00:10:42.683 }' 00:10:42.683 18:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.683 18:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.942 18:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.942 18:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.200 18:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:43.200 18:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.459 [2024-07-15 18:24:35.772505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.459 BaseBdev1 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:43.459 18:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:43.717 18:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.976 [ 00:10:43.976 { 00:10:43.976 "name": "BaseBdev1", 00:10:43.976 "aliases": [ 00:10:43.976 "7d33ce68-42d7-11ef-9ade-d5fc5159efa5" 00:10:43.976 ], 00:10:43.976 "product_name": "Malloc disk", 00:10:43.976 "block_size": 512, 00:10:43.976 "num_blocks": 65536, 00:10:43.976 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:43.976 "assigned_rate_limits": { 00:10:43.976 "rw_ios_per_sec": 0, 00:10:43.976 "rw_mbytes_per_sec": 0, 00:10:43.976 "r_mbytes_per_sec": 0, 00:10:43.976 "w_mbytes_per_sec": 0 00:10:43.976 }, 00:10:43.976 "claimed": true, 00:10:43.976 "claim_type": "exclusive_write", 00:10:43.976 "zoned": false, 00:10:43.976 "supported_io_types": { 00:10:43.976 "read": true, 00:10:43.976 "write": true, 00:10:43.976 "unmap": true, 00:10:43.976 "flush": true, 00:10:43.976 "reset": true, 00:10:43.976 "nvme_admin": false, 00:10:43.976 "nvme_io": false, 00:10:43.976 "nvme_io_md": false, 00:10:43.976 "write_zeroes": true, 00:10:43.976 "zcopy": true, 00:10:43.976 "get_zone_info": false, 00:10:43.976 "zone_management": false, 00:10:43.976 "zone_append": false, 00:10:43.976 "compare": false, 00:10:43.976 "compare_and_write": false, 00:10:43.976 "abort": true, 00:10:43.976 "seek_hole": false, 00:10:43.976 "seek_data": false, 00:10:43.976 "copy": true, 00:10:43.976 "nvme_iov_md": false 00:10:43.976 }, 00:10:43.976 "memory_domains": [ 00:10:43.976 { 00:10:43.976 "dma_device_id": "system", 00:10:43.976 "dma_device_type": 1 00:10:43.976 }, 00:10:43.976 { 00:10:43.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.976 "dma_device_type": 2 00:10:43.976 } 00:10:43.976 ], 00:10:43.976 "driver_specific": {} 00:10:43.976 } 00:10:43.976 ] 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.976 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.234 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.234 "name": "Existed_Raid", 00:10:44.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.234 "strip_size_kb": 64, 00:10:44.234 "state": "configuring", 00:10:44.234 "raid_level": "concat", 00:10:44.234 "superblock": false, 00:10:44.234 "num_base_bdevs": 3, 00:10:44.234 "num_base_bdevs_discovered": 2, 00:10:44.234 "num_base_bdevs_operational": 3, 00:10:44.234 "base_bdevs_list": [ 00:10:44.234 { 00:10:44.234 "name": "BaseBdev1", 00:10:44.234 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:44.234 "is_configured": true, 00:10:44.234 "data_offset": 0, 00:10:44.234 "data_size": 65536 00:10:44.234 }, 00:10:44.234 { 00:10:44.234 "name": null, 00:10:44.234 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:44.234 "is_configured": false, 00:10:44.234 "data_offset": 0, 00:10:44.234 "data_size": 65536 00:10:44.234 }, 00:10:44.234 { 00:10:44.234 "name": "BaseBdev3", 00:10:44.234 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:44.234 "is_configured": true, 00:10:44.234 "data_offset": 0, 00:10:44.234 "data_size": 65536 00:10:44.234 } 00:10:44.234 ] 00:10:44.234 }' 00:10:44.234 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.234 18:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.492 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.492 18:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.750 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:44.750 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:45.009 [2024-07-15 18:24:37.376455] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
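While Existed_Raid stays in "configuring", membership is mutable without recreating the array: bdev_raid_remove_base_bdev vacates a slot (its entry then reports name null, is_configured false in the dump below) and bdev_raid_add_base_bdev, issued a few steps further on, fills it back in. Sketched with this run's socket (the commands match those the test issues; the back-to-back sequencing here is illustrative):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3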
00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.267 "name": "Existed_Raid", 00:10:45.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.267 "strip_size_kb": 64, 00:10:45.267 "state": "configuring", 00:10:45.267 "raid_level": "concat", 00:10:45.267 "superblock": false, 00:10:45.267 "num_base_bdevs": 3, 00:10:45.267 "num_base_bdevs_discovered": 1, 00:10:45.267 "num_base_bdevs_operational": 3, 00:10:45.267 "base_bdevs_list": [ 00:10:45.267 { 00:10:45.267 "name": "BaseBdev1", 00:10:45.267 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:45.267 "is_configured": true, 00:10:45.267 "data_offset": 0, 00:10:45.267 "data_size": 65536 00:10:45.267 }, 00:10:45.267 { 00:10:45.267 "name": null, 00:10:45.267 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:45.267 "is_configured": false, 00:10:45.267 "data_offset": 0, 00:10:45.267 "data_size": 65536 00:10:45.267 }, 00:10:45.267 { 00:10:45.267 "name": null, 00:10:45.267 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:45.267 "is_configured": false, 00:10:45.267 "data_offset": 0, 00:10:45.267 "data_size": 65536 00:10:45.267 } 00:10:45.267 ] 00:10:45.267 }' 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.267 18:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.833 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.833 18:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.833 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:45.833 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.135 [2024-07-15 18:24:38.372515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.135 18:24:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.135 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.393 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.393 "name": "Existed_Raid", 00:10:46.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.393 "strip_size_kb": 64, 00:10:46.393 "state": "configuring", 00:10:46.393 "raid_level": "concat", 00:10:46.393 "superblock": false, 00:10:46.393 "num_base_bdevs": 3, 00:10:46.393 "num_base_bdevs_discovered": 2, 00:10:46.393 "num_base_bdevs_operational": 3, 00:10:46.393 "base_bdevs_list": [ 00:10:46.393 { 00:10:46.393 "name": "BaseBdev1", 00:10:46.393 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:46.393 "is_configured": true, 00:10:46.393 "data_offset": 0, 00:10:46.393 "data_size": 65536 00:10:46.393 }, 00:10:46.393 { 00:10:46.393 "name": null, 00:10:46.393 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:46.393 "is_configured": false, 00:10:46.393 "data_offset": 0, 00:10:46.393 "data_size": 65536 00:10:46.393 }, 00:10:46.393 { 00:10:46.393 "name": "BaseBdev3", 00:10:46.393 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:46.393 "is_configured": true, 00:10:46.393 "data_offset": 0, 00:10:46.393 "data_size": 65536 00:10:46.393 } 00:10:46.393 ] 00:10:46.393 }' 00:10:46.393 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.393 18:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.652 18:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.911 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:46.911 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:47.169 [2024-07-15 18:24:39.540587] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.429 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.695 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.695 "name": "Existed_Raid", 00:10:47.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.695 "strip_size_kb": 64, 00:10:47.695 "state": "configuring", 00:10:47.695 "raid_level": "concat", 00:10:47.695 "superblock": false, 00:10:47.695 "num_base_bdevs": 3, 00:10:47.695 "num_base_bdevs_discovered": 1, 00:10:47.695 "num_base_bdevs_operational": 3, 00:10:47.695 "base_bdevs_list": [ 00:10:47.695 { 00:10:47.695 "name": null, 00:10:47.695 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:47.695 "is_configured": false, 00:10:47.695 "data_offset": 0, 00:10:47.695 "data_size": 65536 00:10:47.695 }, 00:10:47.695 { 00:10:47.695 "name": null, 00:10:47.695 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:47.695 "is_configured": false, 00:10:47.695 "data_offset": 0, 00:10:47.695 "data_size": 65536 00:10:47.695 }, 00:10:47.695 { 00:10:47.695 "name": "BaseBdev3", 00:10:47.695 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:47.695 "is_configured": true, 00:10:47.695 "data_offset": 0, 00:10:47.695 "data_size": 65536 00:10:47.695 } 00:10:47.695 ] 00:10:47.695 }' 00:10:47.695 18:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.695 18:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.953 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.953 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.212 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:48.212 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:48.470 [2024-07-15 18:24:40.689047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
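Annotation, not part of the captured output: around this point the test hot-removes members from the configuring array and claims them back. Reduced to the bare RPCs this run issues (same socket and bdev names as the trace; sketch only):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev3             # detach a member
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # claim it back
    $rpc bdev_malloc_delete BaseBdev1                     # destroy a member's backing bdev
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2   # fill the slot left null
    # Recreating a bdev that carries the old member UUID lets the raid
    # re-adopt it on its own, with no explicit add call (as the trace
    # below shows for NewBaseBdev):
    $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u 7d33ce68-42d7-11ef-9ade-d5fc5159efa5
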
00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.470 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.728 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.728 "name": "Existed_Raid", 00:10:48.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.728 "strip_size_kb": 64, 00:10:48.728 "state": "configuring", 00:10:48.728 "raid_level": "concat", 00:10:48.728 "superblock": false, 00:10:48.728 "num_base_bdevs": 3, 00:10:48.728 "num_base_bdevs_discovered": 2, 00:10:48.728 "num_base_bdevs_operational": 3, 00:10:48.728 "base_bdevs_list": [ 00:10:48.728 { 00:10:48.728 "name": null, 00:10:48.728 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:48.728 "is_configured": false, 00:10:48.728 "data_offset": 0, 00:10:48.728 "data_size": 65536 00:10:48.728 }, 00:10:48.728 { 00:10:48.728 "name": "BaseBdev2", 00:10:48.728 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 0, 00:10:48.728 "data_size": 65536 00:10:48.728 }, 00:10:48.728 { 00:10:48.728 "name": "BaseBdev3", 00:10:48.728 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 0, 00:10:48.728 "data_size": 65536 00:10:48.728 } 00:10:48.728 ] 00:10:48.728 }' 00:10:48.728 18:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.728 18:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.987 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.987 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.246 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:49.246 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:49.247 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.506 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7d33ce68-42d7-11ef-9ade-d5fc5159efa5 00:10:49.765 [2024-07-15 18:24:42.093268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:49.765 [2024-07-15 18:24:42.093298] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e62ba434a00 00:10:49.765 [2024-07-15 18:24:42.093303] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:49.765 [2024-07-15 18:24:42.093326] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e62ba497e20 00:10:49.765 [2024-07-15 
18:24:42.093402] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e62ba434a00 00:10:49.765 [2024-07-15 18:24:42.093407] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e62ba434a00 00:10:49.765 [2024-07-15 18:24:42.093440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.765 NewBaseBdev 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:49.765 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:50.024 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.285 [ 00:10:50.285 { 00:10:50.285 "name": "NewBaseBdev", 00:10:50.286 "aliases": [ 00:10:50.286 "7d33ce68-42d7-11ef-9ade-d5fc5159efa5" 00:10:50.286 ], 00:10:50.286 "product_name": "Malloc disk", 00:10:50.286 "block_size": 512, 00:10:50.286 "num_blocks": 65536, 00:10:50.286 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:50.286 "assigned_rate_limits": { 00:10:50.286 "rw_ios_per_sec": 0, 00:10:50.286 "rw_mbytes_per_sec": 0, 00:10:50.286 "r_mbytes_per_sec": 0, 00:10:50.286 "w_mbytes_per_sec": 0 00:10:50.286 }, 00:10:50.286 "claimed": true, 00:10:50.286 "claim_type": "exclusive_write", 00:10:50.286 "zoned": false, 00:10:50.286 "supported_io_types": { 00:10:50.286 "read": true, 00:10:50.286 "write": true, 00:10:50.286 "unmap": true, 00:10:50.286 "flush": true, 00:10:50.286 "reset": true, 00:10:50.286 "nvme_admin": false, 00:10:50.286 "nvme_io": false, 00:10:50.286 "nvme_io_md": false, 00:10:50.286 "write_zeroes": true, 00:10:50.286 "zcopy": true, 00:10:50.286 "get_zone_info": false, 00:10:50.286 "zone_management": false, 00:10:50.286 "zone_append": false, 00:10:50.286 "compare": false, 00:10:50.286 "compare_and_write": false, 00:10:50.286 "abort": true, 00:10:50.286 "seek_hole": false, 00:10:50.286 "seek_data": false, 00:10:50.286 "copy": true, 00:10:50.286 "nvme_iov_md": false 00:10:50.286 }, 00:10:50.286 "memory_domains": [ 00:10:50.286 { 00:10:50.286 "dma_device_id": "system", 00:10:50.286 "dma_device_type": 1 00:10:50.286 }, 00:10:50.286 { 00:10:50.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.286 "dma_device_type": 2 00:10:50.286 } 00:10:50.286 ], 00:10:50.286 "driver_specific": {} 00:10:50.286 } 00:10:50.286 ] 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.286 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.545 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.545 "name": "Existed_Raid", 00:10:50.545 "uuid": "80f84de3-42d7-11ef-9ade-d5fc5159efa5", 00:10:50.545 "strip_size_kb": 64, 00:10:50.545 "state": "online", 00:10:50.545 "raid_level": "concat", 00:10:50.545 "superblock": false, 00:10:50.545 "num_base_bdevs": 3, 00:10:50.545 "num_base_bdevs_discovered": 3, 00:10:50.545 "num_base_bdevs_operational": 3, 00:10:50.545 "base_bdevs_list": [ 00:10:50.545 { 00:10:50.545 "name": "NewBaseBdev", 00:10:50.545 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:50.545 "is_configured": true, 00:10:50.545 "data_offset": 0, 00:10:50.545 "data_size": 65536 00:10:50.545 }, 00:10:50.545 { 00:10:50.545 "name": "BaseBdev2", 00:10:50.545 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:50.545 "is_configured": true, 00:10:50.545 "data_offset": 0, 00:10:50.545 "data_size": 65536 00:10:50.545 }, 00:10:50.545 { 00:10:50.545 "name": "BaseBdev3", 00:10:50.545 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:50.545 "is_configured": true, 00:10:50.545 "data_offset": 0, 00:10:50.545 "data_size": 65536 00:10:50.545 } 00:10:50.545 ] 00:10:50.545 }' 00:10:50.545 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.545 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:50.804 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:51.068 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:51.068 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:51.328 [2024-07-15 18:24:43.473255] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.328 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:51.328 "name": "Existed_Raid", 00:10:51.328 "aliases": [ 00:10:51.328 "80f84de3-42d7-11ef-9ade-d5fc5159efa5" 00:10:51.328 ], 00:10:51.328 "product_name": "Raid Volume", 00:10:51.328 "block_size": 512, 00:10:51.328 "num_blocks": 196608, 00:10:51.328 "uuid": "80f84de3-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.328 "assigned_rate_limits": { 00:10:51.328 "rw_ios_per_sec": 0, 00:10:51.328 "rw_mbytes_per_sec": 0, 00:10:51.328 "r_mbytes_per_sec": 0, 00:10:51.328 "w_mbytes_per_sec": 0 00:10:51.328 }, 00:10:51.328 "claimed": false, 00:10:51.328 "zoned": false, 00:10:51.328 "supported_io_types": { 00:10:51.328 "read": true, 00:10:51.328 "write": true, 00:10:51.328 "unmap": true, 00:10:51.328 "flush": true, 00:10:51.328 "reset": true, 00:10:51.328 "nvme_admin": false, 00:10:51.328 "nvme_io": false, 00:10:51.328 "nvme_io_md": false, 00:10:51.329 "write_zeroes": true, 00:10:51.329 "zcopy": false, 00:10:51.329 "get_zone_info": false, 00:10:51.329 "zone_management": false, 00:10:51.329 "zone_append": false, 00:10:51.329 "compare": false, 00:10:51.329 "compare_and_write": false, 00:10:51.329 "abort": false, 00:10:51.329 "seek_hole": false, 00:10:51.329 "seek_data": false, 00:10:51.329 "copy": false, 00:10:51.329 "nvme_iov_md": false 00:10:51.329 }, 00:10:51.329 "memory_domains": [ 00:10:51.329 { 00:10:51.329 "dma_device_id": "system", 00:10:51.329 "dma_device_type": 1 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.329 "dma_device_type": 2 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "dma_device_id": "system", 00:10:51.329 "dma_device_type": 1 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.329 "dma_device_type": 2 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "dma_device_id": "system", 00:10:51.329 "dma_device_type": 1 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.329 "dma_device_type": 2 00:10:51.329 } 00:10:51.329 ], 00:10:51.329 "driver_specific": { 00:10:51.329 "raid": { 00:10:51.329 "uuid": "80f84de3-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.329 "strip_size_kb": 64, 00:10:51.329 "state": "online", 00:10:51.329 "raid_level": "concat", 00:10:51.329 "superblock": false, 00:10:51.329 "num_base_bdevs": 3, 00:10:51.329 "num_base_bdevs_discovered": 3, 00:10:51.329 "num_base_bdevs_operational": 3, 00:10:51.329 "base_bdevs_list": [ 00:10:51.329 { 00:10:51.329 "name": "NewBaseBdev", 00:10:51.329 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.329 "is_configured": true, 00:10:51.329 "data_offset": 0, 00:10:51.329 "data_size": 65536 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "name": "BaseBdev2", 00:10:51.329 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.329 "is_configured": true, 00:10:51.329 "data_offset": 0, 00:10:51.329 "data_size": 65536 00:10:51.329 }, 00:10:51.329 { 00:10:51.329 "name": "BaseBdev3", 00:10:51.329 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.329 "is_configured": true, 00:10:51.329 "data_offset": 0, 00:10:51.329 "data_size": 65536 00:10:51.329 } 00:10:51.329 ] 00:10:51.329 } 00:10:51.329 } 00:10:51.329 }' 00:10:51.329 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.329 18:24:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:51.329 BaseBdev2 00:10:51.329 BaseBdev3' 00:10:51.329 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:51.329 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:51.329 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:51.588 "name": "NewBaseBdev", 00:10:51.588 "aliases": [ 00:10:51.588 "7d33ce68-42d7-11ef-9ade-d5fc5159efa5" 00:10:51.588 ], 00:10:51.588 "product_name": "Malloc disk", 00:10:51.588 "block_size": 512, 00:10:51.588 "num_blocks": 65536, 00:10:51.588 "uuid": "7d33ce68-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.588 "assigned_rate_limits": { 00:10:51.588 "rw_ios_per_sec": 0, 00:10:51.588 "rw_mbytes_per_sec": 0, 00:10:51.588 "r_mbytes_per_sec": 0, 00:10:51.588 "w_mbytes_per_sec": 0 00:10:51.588 }, 00:10:51.588 "claimed": true, 00:10:51.588 "claim_type": "exclusive_write", 00:10:51.588 "zoned": false, 00:10:51.588 "supported_io_types": { 00:10:51.588 "read": true, 00:10:51.588 "write": true, 00:10:51.588 "unmap": true, 00:10:51.588 "flush": true, 00:10:51.588 "reset": true, 00:10:51.588 "nvme_admin": false, 00:10:51.588 "nvme_io": false, 00:10:51.588 "nvme_io_md": false, 00:10:51.588 "write_zeroes": true, 00:10:51.588 "zcopy": true, 00:10:51.588 "get_zone_info": false, 00:10:51.588 "zone_management": false, 00:10:51.588 "zone_append": false, 00:10:51.588 "compare": false, 00:10:51.588 "compare_and_write": false, 00:10:51.588 "abort": true, 00:10:51.588 "seek_hole": false, 00:10:51.588 "seek_data": false, 00:10:51.588 "copy": true, 00:10:51.588 "nvme_iov_md": false 00:10:51.588 }, 00:10:51.588 "memory_domains": [ 00:10:51.588 { 00:10:51.588 "dma_device_id": "system", 00:10:51.588 "dma_device_type": 1 00:10:51.588 }, 00:10:51.588 { 00:10:51.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.588 "dma_device_type": 2 00:10:51.588 } 00:10:51.588 ], 00:10:51.588 "driver_specific": {} 00:10:51.588 }' 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:51.588 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:51.847 "name": "BaseBdev2", 00:10:51.847 "aliases": [ 00:10:51.847 "7b2161df-42d7-11ef-9ade-d5fc5159efa5" 00:10:51.847 ], 00:10:51.847 "product_name": "Malloc disk", 00:10:51.847 "block_size": 512, 00:10:51.847 "num_blocks": 65536, 00:10:51.847 "uuid": "7b2161df-42d7-11ef-9ade-d5fc5159efa5", 00:10:51.847 "assigned_rate_limits": { 00:10:51.847 "rw_ios_per_sec": 0, 00:10:51.847 "rw_mbytes_per_sec": 0, 00:10:51.847 "r_mbytes_per_sec": 0, 00:10:51.847 "w_mbytes_per_sec": 0 00:10:51.847 }, 00:10:51.847 "claimed": true, 00:10:51.847 "claim_type": "exclusive_write", 00:10:51.847 "zoned": false, 00:10:51.847 "supported_io_types": { 00:10:51.847 "read": true, 00:10:51.847 "write": true, 00:10:51.847 "unmap": true, 00:10:51.847 "flush": true, 00:10:51.847 "reset": true, 00:10:51.847 "nvme_admin": false, 00:10:51.847 "nvme_io": false, 00:10:51.847 "nvme_io_md": false, 00:10:51.847 "write_zeroes": true, 00:10:51.847 "zcopy": true, 00:10:51.847 "get_zone_info": false, 00:10:51.847 "zone_management": false, 00:10:51.847 "zone_append": false, 00:10:51.847 "compare": false, 00:10:51.847 "compare_and_write": false, 00:10:51.847 "abort": true, 00:10:51.847 "seek_hole": false, 00:10:51.847 "seek_data": false, 00:10:51.847 "copy": true, 00:10:51.847 "nvme_iov_md": false 00:10:51.847 }, 00:10:51.847 "memory_domains": [ 00:10:51.847 { 00:10:51.847 "dma_device_id": "system", 00:10:51.847 "dma_device_type": 1 00:10:51.847 }, 00:10:51.847 { 00:10:51.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.847 "dma_device_type": 2 00:10:51.847 } 00:10:51.847 ], 00:10:51.847 "driver_specific": {} 00:10:51.847 }' 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:10:51.847 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:52.413 "name": "BaseBdev3", 00:10:52.413 "aliases": [ 00:10:52.413 "7b92e96d-42d7-11ef-9ade-d5fc5159efa5" 00:10:52.413 ], 00:10:52.413 "product_name": "Malloc disk", 00:10:52.413 "block_size": 512, 00:10:52.413 "num_blocks": 65536, 00:10:52.413 "uuid": "7b92e96d-42d7-11ef-9ade-d5fc5159efa5", 00:10:52.413 "assigned_rate_limits": { 00:10:52.413 "rw_ios_per_sec": 0, 00:10:52.413 "rw_mbytes_per_sec": 0, 00:10:52.413 "r_mbytes_per_sec": 0, 00:10:52.413 "w_mbytes_per_sec": 0 00:10:52.413 }, 00:10:52.413 "claimed": true, 00:10:52.413 "claim_type": "exclusive_write", 00:10:52.413 "zoned": false, 00:10:52.413 "supported_io_types": { 00:10:52.413 "read": true, 00:10:52.413 "write": true, 00:10:52.413 "unmap": true, 00:10:52.413 "flush": true, 00:10:52.413 "reset": true, 00:10:52.413 "nvme_admin": false, 00:10:52.413 "nvme_io": false, 00:10:52.413 "nvme_io_md": false, 00:10:52.413 "write_zeroes": true, 00:10:52.413 "zcopy": true, 00:10:52.413 "get_zone_info": false, 00:10:52.413 "zone_management": false, 00:10:52.413 "zone_append": false, 00:10:52.413 "compare": false, 00:10:52.413 "compare_and_write": false, 00:10:52.413 "abort": true, 00:10:52.413 "seek_hole": false, 00:10:52.413 "seek_data": false, 00:10:52.413 "copy": true, 00:10:52.413 "nvme_iov_md": false 00:10:52.413 }, 00:10:52.413 "memory_domains": [ 00:10:52.413 { 00:10:52.413 "dma_device_id": "system", 00:10:52.413 "dma_device_type": 1 00:10:52.413 }, 00:10:52.413 { 00:10:52.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.413 "dma_device_type": 2 00:10:52.413 } 00:10:52.413 ], 00:10:52.413 "driver_specific": {} 00:10:52.413 }' 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:52.413 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:52.671 [2024-07-15 18:24:44.821290] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.671 [2024-07-15 18:24:44.821316] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.671 [2024-07-15 18:24:44.821338] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.671 [2024-07-15 18:24:44.821352] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.671 [2024-07-15 18:24:44.821357] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e62ba434a00 name Existed_Raid, state offline 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54052 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 54052 ']' 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 54052 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 54052 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:52.672 killing process with pid 54052 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54052' 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 54052 00:10:52.672 [2024-07-15 18:24:44.853386] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.672 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 54052 00:10:52.672 [2024-07-15 18:24:44.876308] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:52.930 00:10:52.930 real 0m24.209s 00:10:52.930 user 0m43.916s 00:10:52.930 sys 0m3.594s 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.930 ************************************ 00:10:52.930 END TEST raid_state_function_test 00:10:52.930 ************************************ 00:10:52.930 18:24:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:52.930 18:24:45 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:52.930 18:24:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:52.930 18:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.930 18:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.930 ************************************ 00:10:52.930 START TEST raid_state_function_test_sb 00:10:52.930 ************************************ 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:52.930 18:24:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54781 00:10:52.930 Process raid pid: 54781 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54781' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54781 /var/tmp/spdk-raid.sock 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54781 ']' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.930 18:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.930 [2024-07-15 18:24:45.154505] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:10:52.930 [2024-07-15 18:24:45.154705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:53.495 EAL: TSC is not safe to use in SMP mode 00:10:53.495 EAL: TSC is not invariant 00:10:53.495 [2024-07-15 18:24:45.748262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.495 [2024-07-15 18:24:45.857092] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:53.495 [2024-07-15 18:24:45.859185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.495 [2024-07-15 18:24:45.859965] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.495 [2024-07-15 18:24:45.859986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.061 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.061 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:54.061 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:54.367 [2024-07-15 18:24:46.451757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.367 [2024-07-15 18:24:46.451816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.367 [2024-07-15 18:24:46.451822] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.367 [2024-07-15 18:24:46.451832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.367 [2024-07-15 18:24:46.451836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.367 [2024-07-15 18:24:46.451844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:54.367 
18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.367 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.625 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.625 "name": "Existed_Raid", 00:10:54.625 "uuid": "8391596c-42d7-11ef-9ade-d5fc5159efa5", 00:10:54.625 "strip_size_kb": 64, 00:10:54.625 "state": "configuring", 00:10:54.625 "raid_level": "concat", 00:10:54.625 "superblock": true, 00:10:54.625 "num_base_bdevs": 3, 00:10:54.625 "num_base_bdevs_discovered": 0, 00:10:54.625 "num_base_bdevs_operational": 3, 00:10:54.625 "base_bdevs_list": [ 00:10:54.625 { 00:10:54.625 "name": "BaseBdev1", 00:10:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.626 "is_configured": false, 00:10:54.626 "data_offset": 0, 00:10:54.626 "data_size": 0 00:10:54.626 }, 00:10:54.626 { 00:10:54.626 "name": "BaseBdev2", 00:10:54.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.626 "is_configured": false, 00:10:54.626 "data_offset": 0, 00:10:54.626 "data_size": 0 00:10:54.626 }, 00:10:54.626 { 00:10:54.626 "name": "BaseBdev3", 00:10:54.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.626 "is_configured": false, 00:10:54.626 "data_offset": 0, 00:10:54.626 "data_size": 0 00:10:54.626 } 00:10:54.626 ] 00:10:54.626 }' 00:10:54.626 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.626 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.883 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:55.142 [2024-07-15 18:24:47.299749] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.142 [2024-07-15 18:24:47.299778] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x169064434500 name Existed_Raid, state configuring 00:10:55.142 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:55.399 [2024-07-15 18:24:47.575737] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.399 [2024-07-15 18:24:47.575789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.399 [2024-07-15 18:24:47.575795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.399 [2024-07-15 18:24:47.575805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.399 [2024-07-15 18:24:47.575809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.399 
[2024-07-15 18:24:47.575817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.399 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.657 [2024-07-15 18:24:47.896757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.657 BaseBdev1 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:55.657 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:55.915 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.173 [ 00:10:56.173 { 00:10:56.173 "name": "BaseBdev1", 00:10:56.173 "aliases": [ 00:10:56.173 "846daeaa-42d7-11ef-9ade-d5fc5159efa5" 00:10:56.173 ], 00:10:56.173 "product_name": "Malloc disk", 00:10:56.173 "block_size": 512, 00:10:56.173 "num_blocks": 65536, 00:10:56.173 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:10:56.173 "assigned_rate_limits": { 00:10:56.173 "rw_ios_per_sec": 0, 00:10:56.174 "rw_mbytes_per_sec": 0, 00:10:56.174 "r_mbytes_per_sec": 0, 00:10:56.174 "w_mbytes_per_sec": 0 00:10:56.174 }, 00:10:56.174 "claimed": true, 00:10:56.174 "claim_type": "exclusive_write", 00:10:56.174 "zoned": false, 00:10:56.174 "supported_io_types": { 00:10:56.174 "read": true, 00:10:56.174 "write": true, 00:10:56.174 "unmap": true, 00:10:56.174 "flush": true, 00:10:56.174 "reset": true, 00:10:56.174 "nvme_admin": false, 00:10:56.174 "nvme_io": false, 00:10:56.174 "nvme_io_md": false, 00:10:56.174 "write_zeroes": true, 00:10:56.174 "zcopy": true, 00:10:56.174 "get_zone_info": false, 00:10:56.174 "zone_management": false, 00:10:56.174 "zone_append": false, 00:10:56.174 "compare": false, 00:10:56.174 "compare_and_write": false, 00:10:56.174 "abort": true, 00:10:56.174 "seek_hole": false, 00:10:56.174 "seek_data": false, 00:10:56.174 "copy": true, 00:10:56.174 "nvme_iov_md": false 00:10:56.174 }, 00:10:56.174 "memory_domains": [ 00:10:56.174 { 00:10:56.174 "dma_device_id": "system", 00:10:56.174 "dma_device_type": 1 00:10:56.174 }, 00:10:56.174 { 00:10:56.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.174 "dma_device_type": 2 00:10:56.174 } 00:10:56.174 ], 00:10:56.174 "driver_specific": {} 00:10:56.174 } 00:10:56.174 ] 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.174 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.432 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.432 "name": "Existed_Raid", 00:10:56.432 "uuid": "843cdafe-42d7-11ef-9ade-d5fc5159efa5", 00:10:56.432 "strip_size_kb": 64, 00:10:56.432 "state": "configuring", 00:10:56.432 "raid_level": "concat", 00:10:56.432 "superblock": true, 00:10:56.432 "num_base_bdevs": 3, 00:10:56.432 "num_base_bdevs_discovered": 1, 00:10:56.432 "num_base_bdevs_operational": 3, 00:10:56.432 "base_bdevs_list": [ 00:10:56.432 { 00:10:56.432 "name": "BaseBdev1", 00:10:56.432 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:10:56.432 "is_configured": true, 00:10:56.432 "data_offset": 2048, 00:10:56.432 "data_size": 63488 00:10:56.432 }, 00:10:56.432 { 00:10:56.432 "name": "BaseBdev2", 00:10:56.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.433 "is_configured": false, 00:10:56.433 "data_offset": 0, 00:10:56.433 "data_size": 0 00:10:56.433 }, 00:10:56.433 { 00:10:56.433 "name": "BaseBdev3", 00:10:56.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.433 "is_configured": false, 00:10:56.433 "data_offset": 0, 00:10:56.433 "data_size": 0 00:10:56.433 } 00:10:56.433 ] 00:10:56.433 }' 00:10:56.433 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.433 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.692 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:56.951 [2024-07-15 18:24:49.247650] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.951 [2024-07-15 18:24:49.247696] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x169064434500 name Existed_Raid, state configuring 00:10:56.951 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:57.210 [2024-07-15 18:24:49.531652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.210 [2024-07-15 
18:24:49.532475] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.210 [2024-07-15 18:24:49.532517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.210 [2024-07-15 18:24:49.532522] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.210 [2024-07-15 18:24:49.532532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.210 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.777 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.777 "name": "Existed_Raid", 00:10:57.777 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:10:57.777 "strip_size_kb": 64, 00:10:57.777 "state": "configuring", 00:10:57.777 "raid_level": "concat", 00:10:57.777 "superblock": true, 00:10:57.777 "num_base_bdevs": 3, 00:10:57.777 "num_base_bdevs_discovered": 1, 00:10:57.777 "num_base_bdevs_operational": 3, 00:10:57.777 "base_bdevs_list": [ 00:10:57.777 { 00:10:57.777 "name": "BaseBdev1", 00:10:57.777 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:10:57.777 "is_configured": true, 00:10:57.777 "data_offset": 2048, 00:10:57.777 "data_size": 63488 00:10:57.777 }, 00:10:57.777 { 00:10:57.777 "name": "BaseBdev2", 00:10:57.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.777 "is_configured": false, 00:10:57.777 "data_offset": 0, 00:10:57.777 "data_size": 0 00:10:57.777 }, 00:10:57.777 { 00:10:57.777 "name": "BaseBdev3", 00:10:57.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.777 "is_configured": false, 00:10:57.777 "data_offset": 0, 00:10:57.777 "data_size": 0 00:10:57.777 } 00:10:57.777 ] 00:10:57.777 }' 00:10:57.777 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.777 
18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.036 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.294 [2024-07-15 18:24:50.463742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.294 BaseBdev2 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:58.294 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:58.553 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.812 [ 00:10:58.812 { 00:10:58.812 "name": "BaseBdev2", 00:10:58.812 "aliases": [ 00:10:58.812 "85f58253-42d7-11ef-9ade-d5fc5159efa5" 00:10:58.812 ], 00:10:58.812 "product_name": "Malloc disk", 00:10:58.812 "block_size": 512, 00:10:58.812 "num_blocks": 65536, 00:10:58.812 "uuid": "85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:10:58.812 "assigned_rate_limits": { 00:10:58.812 "rw_ios_per_sec": 0, 00:10:58.812 "rw_mbytes_per_sec": 0, 00:10:58.812 "r_mbytes_per_sec": 0, 00:10:58.812 "w_mbytes_per_sec": 0 00:10:58.812 }, 00:10:58.812 "claimed": true, 00:10:58.812 "claim_type": "exclusive_write", 00:10:58.812 "zoned": false, 00:10:58.812 "supported_io_types": { 00:10:58.812 "read": true, 00:10:58.812 "write": true, 00:10:58.812 "unmap": true, 00:10:58.812 "flush": true, 00:10:58.812 "reset": true, 00:10:58.812 "nvme_admin": false, 00:10:58.812 "nvme_io": false, 00:10:58.812 "nvme_io_md": false, 00:10:58.812 "write_zeroes": true, 00:10:58.812 "zcopy": true, 00:10:58.812 "get_zone_info": false, 00:10:58.812 "zone_management": false, 00:10:58.812 "zone_append": false, 00:10:58.812 "compare": false, 00:10:58.812 "compare_and_write": false, 00:10:58.812 "abort": true, 00:10:58.812 "seek_hole": false, 00:10:58.812 "seek_data": false, 00:10:58.812 "copy": true, 00:10:58.812 "nvme_iov_md": false 00:10:58.812 }, 00:10:58.812 "memory_domains": [ 00:10:58.812 { 00:10:58.812 "dma_device_id": "system", 00:10:58.812 "dma_device_type": 1 00:10:58.812 }, 00:10:58.812 { 00:10:58.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.812 "dma_device_type": 2 00:10:58.812 } 00:10:58.812 ], 00:10:58.812 "driver_specific": {} 00:10:58.812 } 00:10:58.812 ] 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:58.812 18:24:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.812 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.071 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.071 "name": "Existed_Raid", 00:10:59.071 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:10:59.071 "strip_size_kb": 64, 00:10:59.071 "state": "configuring", 00:10:59.071 "raid_level": "concat", 00:10:59.071 "superblock": true, 00:10:59.071 "num_base_bdevs": 3, 00:10:59.071 "num_base_bdevs_discovered": 2, 00:10:59.071 "num_base_bdevs_operational": 3, 00:10:59.071 "base_bdevs_list": [ 00:10:59.071 { 00:10:59.071 "name": "BaseBdev1", 00:10:59.071 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:10:59.071 "is_configured": true, 00:10:59.071 "data_offset": 2048, 00:10:59.071 "data_size": 63488 00:10:59.071 }, 00:10:59.071 { 00:10:59.071 "name": "BaseBdev2", 00:10:59.071 "uuid": "85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:10:59.071 "is_configured": true, 00:10:59.071 "data_offset": 2048, 00:10:59.071 "data_size": 63488 00:10:59.071 }, 00:10:59.071 { 00:10:59.071 "name": "BaseBdev3", 00:10:59.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.071 "is_configured": false, 00:10:59.071 "data_offset": 0, 00:10:59.071 "data_size": 0 00:10:59.071 } 00:10:59.071 ] 00:10:59.071 }' 00:10:59.071 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.071 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.329 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.587 [2024-07-15 18:24:51.759672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.587 [2024-07-15 18:24:51.759739] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x169064434a00 00:10:59.587 [2024-07-15 18:24:51.759746] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.587 [2024-07-15 18:24:51.759768] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x169064497e20 00:10:59.587 [2024-07-15 18:24:51.759822] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x169064434a00 00:10:59.587 [2024-07-15 18:24:51.759826] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x169064434a00 00:10:59.587 [2024-07-15 18:24:51.759848] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.587 BaseBdev3 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:59.588 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:59.846 18:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.105 [ 00:11:00.105 { 00:11:00.105 "name": "BaseBdev3", 00:11:00.105 "aliases": [ 00:11:00.105 "86bb4110-42d7-11ef-9ade-d5fc5159efa5" 00:11:00.105 ], 00:11:00.105 "product_name": "Malloc disk", 00:11:00.105 "block_size": 512, 00:11:00.105 "num_blocks": 65536, 00:11:00.105 "uuid": "86bb4110-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.105 "assigned_rate_limits": { 00:11:00.105 "rw_ios_per_sec": 0, 00:11:00.105 "rw_mbytes_per_sec": 0, 00:11:00.105 "r_mbytes_per_sec": 0, 00:11:00.105 "w_mbytes_per_sec": 0 00:11:00.105 }, 00:11:00.105 "claimed": true, 00:11:00.105 "claim_type": "exclusive_write", 00:11:00.105 "zoned": false, 00:11:00.105 "supported_io_types": { 00:11:00.105 "read": true, 00:11:00.105 "write": true, 00:11:00.105 "unmap": true, 00:11:00.105 "flush": true, 00:11:00.105 "reset": true, 00:11:00.105 "nvme_admin": false, 00:11:00.105 "nvme_io": false, 00:11:00.105 "nvme_io_md": false, 00:11:00.105 "write_zeroes": true, 00:11:00.105 "zcopy": true, 00:11:00.105 "get_zone_info": false, 00:11:00.105 "zone_management": false, 00:11:00.105 "zone_append": false, 00:11:00.105 "compare": false, 00:11:00.105 "compare_and_write": false, 00:11:00.105 "abort": true, 00:11:00.105 "seek_hole": false, 00:11:00.105 "seek_data": false, 00:11:00.105 "copy": true, 00:11:00.105 "nvme_iov_md": false 00:11:00.105 }, 00:11:00.105 "memory_domains": [ 00:11:00.105 { 00:11:00.105 "dma_device_id": "system", 00:11:00.105 "dma_device_type": 1 00:11:00.105 }, 00:11:00.105 { 00:11:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.105 "dma_device_type": 2 00:11:00.105 } 00:11:00.105 ], 00:11:00.105 "driver_specific": {} 00:11:00.105 } 00:11:00.105 ] 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.105 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.364 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.364 "name": "Existed_Raid", 00:11:00.364 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.364 "strip_size_kb": 64, 00:11:00.364 "state": "online", 00:11:00.364 "raid_level": "concat", 00:11:00.364 "superblock": true, 00:11:00.364 "num_base_bdevs": 3, 00:11:00.364 "num_base_bdevs_discovered": 3, 00:11:00.364 "num_base_bdevs_operational": 3, 00:11:00.364 "base_bdevs_list": [ 00:11:00.364 { 00:11:00.364 "name": "BaseBdev1", 00:11:00.364 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.364 "is_configured": true, 00:11:00.364 "data_offset": 2048, 00:11:00.364 "data_size": 63488 00:11:00.364 }, 00:11:00.364 { 00:11:00.364 "name": "BaseBdev2", 00:11:00.364 "uuid": "85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.364 "is_configured": true, 00:11:00.364 "data_offset": 2048, 00:11:00.364 "data_size": 63488 00:11:00.364 }, 00:11:00.364 { 00:11:00.364 "name": "BaseBdev3", 00:11:00.364 "uuid": "86bb4110-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.364 "is_configured": true, 00:11:00.364 "data_offset": 2048, 00:11:00.364 "data_size": 63488 00:11:00.364 } 00:11:00.364 ] 00:11:00.364 }' 00:11:00.364 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.364 18:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:00.623 18:24:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:00.623 18:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:00.882 [2024-07-15 18:24:53.083509] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.882 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:00.882 "name": "Existed_Raid", 00:11:00.882 "aliases": [ 00:11:00.882 "85674db8-42d7-11ef-9ade-d5fc5159efa5" 00:11:00.882 ], 00:11:00.882 "product_name": "Raid Volume", 00:11:00.882 "block_size": 512, 00:11:00.882 "num_blocks": 190464, 00:11:00.882 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.882 "assigned_rate_limits": { 00:11:00.882 "rw_ios_per_sec": 0, 00:11:00.882 "rw_mbytes_per_sec": 0, 00:11:00.882 "r_mbytes_per_sec": 0, 00:11:00.882 "w_mbytes_per_sec": 0 00:11:00.882 }, 00:11:00.882 "claimed": false, 00:11:00.882 "zoned": false, 00:11:00.882 "supported_io_types": { 00:11:00.882 "read": true, 00:11:00.882 "write": true, 00:11:00.882 "unmap": true, 00:11:00.882 "flush": true, 00:11:00.882 "reset": true, 00:11:00.882 "nvme_admin": false, 00:11:00.882 "nvme_io": false, 00:11:00.882 "nvme_io_md": false, 00:11:00.882 "write_zeroes": true, 00:11:00.882 "zcopy": false, 00:11:00.882 "get_zone_info": false, 00:11:00.882 "zone_management": false, 00:11:00.882 "zone_append": false, 00:11:00.882 "compare": false, 00:11:00.882 "compare_and_write": false, 00:11:00.882 "abort": false, 00:11:00.882 "seek_hole": false, 00:11:00.882 "seek_data": false, 00:11:00.882 "copy": false, 00:11:00.882 "nvme_iov_md": false 00:11:00.882 }, 00:11:00.882 "memory_domains": [ 00:11:00.882 { 00:11:00.882 "dma_device_id": "system", 00:11:00.882 "dma_device_type": 1 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.882 "dma_device_type": 2 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "dma_device_id": "system", 00:11:00.882 "dma_device_type": 1 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.882 "dma_device_type": 2 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "dma_device_id": "system", 00:11:00.882 "dma_device_type": 1 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.882 "dma_device_type": 2 00:11:00.882 } 00:11:00.882 ], 00:11:00.882 "driver_specific": { 00:11:00.882 "raid": { 00:11:00.882 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.882 "strip_size_kb": 64, 00:11:00.882 "state": "online", 00:11:00.882 "raid_level": "concat", 00:11:00.882 "superblock": true, 00:11:00.882 "num_base_bdevs": 3, 00:11:00.882 "num_base_bdevs_discovered": 3, 00:11:00.882 "num_base_bdevs_operational": 3, 00:11:00.882 "base_bdevs_list": [ 00:11:00.882 { 00:11:00.882 "name": "BaseBdev1", 00:11:00.882 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.882 "is_configured": true, 00:11:00.882 "data_offset": 2048, 00:11:00.882 "data_size": 63488 00:11:00.882 }, 00:11:00.882 { 00:11:00.882 "name": "BaseBdev2", 00:11:00.882 "uuid": "85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.882 "is_configured": true, 00:11:00.882 "data_offset": 2048, 00:11:00.882 "data_size": 63488 00:11:00.882 }, 00:11:00.882 { 00:11:00.883 "name": "BaseBdev3", 00:11:00.883 "uuid": 
"86bb4110-42d7-11ef-9ade-d5fc5159efa5", 00:11:00.883 "is_configured": true, 00:11:00.883 "data_offset": 2048, 00:11:00.883 "data_size": 63488 00:11:00.883 } 00:11:00.883 ] 00:11:00.883 } 00:11:00.883 } 00:11:00.883 }' 00:11:00.883 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.883 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:00.883 BaseBdev2 00:11:00.883 BaseBdev3' 00:11:00.883 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:00.883 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:00.883 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.140 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.140 "name": "BaseBdev1", 00:11:01.140 "aliases": [ 00:11:01.140 "846daeaa-42d7-11ef-9ade-d5fc5159efa5" 00:11:01.140 ], 00:11:01.140 "product_name": "Malloc disk", 00:11:01.140 "block_size": 512, 00:11:01.140 "num_blocks": 65536, 00:11:01.140 "uuid": "846daeaa-42d7-11ef-9ade-d5fc5159efa5", 00:11:01.140 "assigned_rate_limits": { 00:11:01.140 "rw_ios_per_sec": 0, 00:11:01.140 "rw_mbytes_per_sec": 0, 00:11:01.140 "r_mbytes_per_sec": 0, 00:11:01.140 "w_mbytes_per_sec": 0 00:11:01.140 }, 00:11:01.140 "claimed": true, 00:11:01.140 "claim_type": "exclusive_write", 00:11:01.140 "zoned": false, 00:11:01.140 "supported_io_types": { 00:11:01.140 "read": true, 00:11:01.140 "write": true, 00:11:01.140 "unmap": true, 00:11:01.140 "flush": true, 00:11:01.140 "reset": true, 00:11:01.140 "nvme_admin": false, 00:11:01.140 "nvme_io": false, 00:11:01.140 "nvme_io_md": false, 00:11:01.140 "write_zeroes": true, 00:11:01.140 "zcopy": true, 00:11:01.140 "get_zone_info": false, 00:11:01.140 "zone_management": false, 00:11:01.140 "zone_append": false, 00:11:01.140 "compare": false, 00:11:01.140 "compare_and_write": false, 00:11:01.140 "abort": true, 00:11:01.140 "seek_hole": false, 00:11:01.140 "seek_data": false, 00:11:01.140 "copy": true, 00:11:01.140 "nvme_iov_md": false 00:11:01.140 }, 00:11:01.140 "memory_domains": [ 00:11:01.140 { 00:11:01.140 "dma_device_id": "system", 00:11:01.140 "dma_device_type": 1 00:11:01.140 }, 00:11:01.140 { 00:11:01.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.140 "dma_device_type": 2 00:11:01.140 } 00:11:01.140 ], 00:11:01.140 "driver_specific": {} 00:11:01.140 }' 00:11:01.140 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.140 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.141 
18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.141 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.398 "name": "BaseBdev2", 00:11:01.398 "aliases": [ 00:11:01.398 "85f58253-42d7-11ef-9ade-d5fc5159efa5" 00:11:01.398 ], 00:11:01.398 "product_name": "Malloc disk", 00:11:01.398 "block_size": 512, 00:11:01.398 "num_blocks": 65536, 00:11:01.398 "uuid": "85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:11:01.398 "assigned_rate_limits": { 00:11:01.398 "rw_ios_per_sec": 0, 00:11:01.398 "rw_mbytes_per_sec": 0, 00:11:01.398 "r_mbytes_per_sec": 0, 00:11:01.398 "w_mbytes_per_sec": 0 00:11:01.398 }, 00:11:01.398 "claimed": true, 00:11:01.398 "claim_type": "exclusive_write", 00:11:01.398 "zoned": false, 00:11:01.398 "supported_io_types": { 00:11:01.398 "read": true, 00:11:01.398 "write": true, 00:11:01.398 "unmap": true, 00:11:01.398 "flush": true, 00:11:01.398 "reset": true, 00:11:01.398 "nvme_admin": false, 00:11:01.398 "nvme_io": false, 00:11:01.398 "nvme_io_md": false, 00:11:01.398 "write_zeroes": true, 00:11:01.398 "zcopy": true, 00:11:01.398 "get_zone_info": false, 00:11:01.398 "zone_management": false, 00:11:01.398 "zone_append": false, 00:11:01.398 "compare": false, 00:11:01.398 "compare_and_write": false, 00:11:01.398 "abort": true, 00:11:01.398 "seek_hole": false, 00:11:01.398 "seek_data": false, 00:11:01.398 "copy": true, 00:11:01.398 "nvme_iov_md": false 00:11:01.398 }, 00:11:01.398 "memory_domains": [ 00:11:01.398 { 00:11:01.398 "dma_device_id": "system", 00:11:01.398 "dma_device_type": 1 00:11:01.398 }, 00:11:01.398 { 00:11:01.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.398 "dma_device_type": 2 00:11:01.398 } 00:11:01.398 ], 00:11:01.398 "driver_specific": {} 00:11:01.398 }' 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.398 18:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:01.398 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.658 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.658 "name": "BaseBdev3", 00:11:01.658 "aliases": [ 00:11:01.658 "86bb4110-42d7-11ef-9ade-d5fc5159efa5" 00:11:01.658 ], 00:11:01.658 "product_name": "Malloc disk", 00:11:01.658 "block_size": 512, 00:11:01.658 "num_blocks": 65536, 00:11:01.658 "uuid": "86bb4110-42d7-11ef-9ade-d5fc5159efa5", 00:11:01.658 "assigned_rate_limits": { 00:11:01.658 "rw_ios_per_sec": 0, 00:11:01.658 "rw_mbytes_per_sec": 0, 00:11:01.658 "r_mbytes_per_sec": 0, 00:11:01.658 "w_mbytes_per_sec": 0 00:11:01.658 }, 00:11:01.658 "claimed": true, 00:11:01.658 "claim_type": "exclusive_write", 00:11:01.658 "zoned": false, 00:11:01.658 "supported_io_types": { 00:11:01.658 "read": true, 00:11:01.658 "write": true, 00:11:01.658 "unmap": true, 00:11:01.658 "flush": true, 00:11:01.658 "reset": true, 00:11:01.658 "nvme_admin": false, 00:11:01.658 "nvme_io": false, 00:11:01.658 "nvme_io_md": false, 00:11:01.658 "write_zeroes": true, 00:11:01.658 "zcopy": true, 00:11:01.658 "get_zone_info": false, 00:11:01.658 "zone_management": false, 00:11:01.658 "zone_append": false, 00:11:01.658 "compare": false, 00:11:01.658 "compare_and_write": false, 00:11:01.658 "abort": true, 00:11:01.658 "seek_hole": false, 00:11:01.658 "seek_data": false, 00:11:01.658 "copy": true, 00:11:01.658 "nvme_iov_md": false 00:11:01.658 }, 00:11:01.658 "memory_domains": [ 00:11:01.658 { 00:11:01.658 "dma_device_id": "system", 00:11:01.658 "dma_device_type": 1 00:11:01.658 }, 00:11:01.658 { 00:11:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.658 "dma_device_type": 2 00:11:01.658 } 00:11:01.658 ], 00:11:01.658 "driver_specific": {} 00:11:01.658 }' 00:11:01.658 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.658 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.916 18:24:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.916 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.916 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:02.174 [2024-07-15 18:24:54.319427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.174 [2024-07-15 18:24:54.319460] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.174 [2024-07-15 18:24:54.319475] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.174 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.431 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:02.431 "name": "Existed_Raid", 00:11:02.431 "uuid": "85674db8-42d7-11ef-9ade-d5fc5159efa5", 00:11:02.431 "strip_size_kb": 64, 00:11:02.431 "state": "offline", 00:11:02.431 "raid_level": "concat", 00:11:02.431 "superblock": true, 00:11:02.431 "num_base_bdevs": 3, 00:11:02.431 "num_base_bdevs_discovered": 2, 00:11:02.431 "num_base_bdevs_operational": 2, 00:11:02.431 "base_bdevs_list": [ 00:11:02.431 { 00:11:02.431 "name": null, 00:11:02.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.431 "is_configured": false, 00:11:02.431 "data_offset": 2048, 00:11:02.431 "data_size": 63488 00:11:02.431 }, 00:11:02.431 { 00:11:02.431 "name": "BaseBdev2", 00:11:02.431 "uuid": 
"85f58253-42d7-11ef-9ade-d5fc5159efa5", 00:11:02.431 "is_configured": true, 00:11:02.431 "data_offset": 2048, 00:11:02.431 "data_size": 63488 00:11:02.431 }, 00:11:02.431 { 00:11:02.431 "name": "BaseBdev3", 00:11:02.431 "uuid": "86bb4110-42d7-11ef-9ade-d5fc5159efa5", 00:11:02.431 "is_configured": true, 00:11:02.431 "data_offset": 2048, 00:11:02.431 "data_size": 63488 00:11:02.431 } 00:11:02.431 ] 00:11:02.431 }' 00:11:02.431 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:02.431 18:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:02.689 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:02.689 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.689 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:02.948 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:02.948 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.948 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:03.207 [2024-07-15 18:24:55.465246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.207 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:03.207 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:03.207 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.207 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:03.466 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:03.466 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.466 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:03.724 [2024-07-15 18:24:55.981311] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.724 [2024-07-15 18:24:55.981343] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x169064434a00 name Existed_Raid, state offline 00:11:03.725 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:03.725 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:03.725 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.725 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:03.983 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:03.983 18:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:03.983 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:03.983 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:03.983 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:03.983 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.242 BaseBdev2 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:04.242 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:04.501 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.760 [ 00:11:04.760 { 00:11:04.760 "name": "BaseBdev2", 00:11:04.760 "aliases": [ 00:11:04.760 "899643c3-42d7-11ef-9ade-d5fc5159efa5" 00:11:04.760 ], 00:11:04.760 "product_name": "Malloc disk", 00:11:04.760 "block_size": 512, 00:11:04.760 "num_blocks": 65536, 00:11:04.760 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:04.760 "assigned_rate_limits": { 00:11:04.760 "rw_ios_per_sec": 0, 00:11:04.760 "rw_mbytes_per_sec": 0, 00:11:04.760 "r_mbytes_per_sec": 0, 00:11:04.760 "w_mbytes_per_sec": 0 00:11:04.760 }, 00:11:04.760 "claimed": false, 00:11:04.760 "zoned": false, 00:11:04.760 "supported_io_types": { 00:11:04.760 "read": true, 00:11:04.760 "write": true, 00:11:04.760 "unmap": true, 00:11:04.760 "flush": true, 00:11:04.760 "reset": true, 00:11:04.760 "nvme_admin": false, 00:11:04.760 "nvme_io": false, 00:11:04.760 "nvme_io_md": false, 00:11:04.760 "write_zeroes": true, 00:11:04.760 "zcopy": true, 00:11:04.760 "get_zone_info": false, 00:11:04.760 "zone_management": false, 00:11:04.760 "zone_append": false, 00:11:04.760 "compare": false, 00:11:04.760 "compare_and_write": false, 00:11:04.760 "abort": true, 00:11:04.760 "seek_hole": false, 00:11:04.760 "seek_data": false, 00:11:04.760 "copy": true, 00:11:04.760 "nvme_iov_md": false 00:11:04.760 }, 00:11:04.760 "memory_domains": [ 00:11:04.760 { 00:11:04.760 "dma_device_id": "system", 00:11:04.760 "dma_device_type": 1 00:11:04.760 }, 00:11:04.760 { 00:11:04.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.760 "dma_device_type": 2 00:11:04.760 } 00:11:04.760 ], 00:11:04.760 "driver_specific": {} 00:11:04.760 } 00:11:04.760 ] 00:11:04.760 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:04.760 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:04.760 18:24:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:04.760 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.019 BaseBdev3 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:05.019 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:05.277 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.536 [ 00:11:05.536 { 00:11:05.536 "name": "BaseBdev3", 00:11:05.536 "aliases": [ 00:11:05.536 "8a024b2f-42d7-11ef-9ade-d5fc5159efa5" 00:11:05.536 ], 00:11:05.536 "product_name": "Malloc disk", 00:11:05.536 "block_size": 512, 00:11:05.536 "num_blocks": 65536, 00:11:05.536 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:05.536 "assigned_rate_limits": { 00:11:05.536 "rw_ios_per_sec": 0, 00:11:05.536 "rw_mbytes_per_sec": 0, 00:11:05.536 "r_mbytes_per_sec": 0, 00:11:05.536 "w_mbytes_per_sec": 0 00:11:05.536 }, 00:11:05.536 "claimed": false, 00:11:05.536 "zoned": false, 00:11:05.536 "supported_io_types": { 00:11:05.536 "read": true, 00:11:05.536 "write": true, 00:11:05.536 "unmap": true, 00:11:05.536 "flush": true, 00:11:05.536 "reset": true, 00:11:05.536 "nvme_admin": false, 00:11:05.536 "nvme_io": false, 00:11:05.536 "nvme_io_md": false, 00:11:05.536 "write_zeroes": true, 00:11:05.536 "zcopy": true, 00:11:05.536 "get_zone_info": false, 00:11:05.536 "zone_management": false, 00:11:05.536 "zone_append": false, 00:11:05.536 "compare": false, 00:11:05.536 "compare_and_write": false, 00:11:05.536 "abort": true, 00:11:05.536 "seek_hole": false, 00:11:05.536 "seek_data": false, 00:11:05.536 "copy": true, 00:11:05.536 "nvme_iov_md": false 00:11:05.536 }, 00:11:05.536 "memory_domains": [ 00:11:05.536 { 00:11:05.536 "dma_device_id": "system", 00:11:05.536 "dma_device_type": 1 00:11:05.536 }, 00:11:05.536 { 00:11:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.536 "dma_device_type": 2 00:11:05.536 } 00:11:05.536 ], 00:11:05.536 "driver_specific": {} 00:11:05.536 } 00:11:05.536 ] 00:11:05.537 18:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:05.537 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:05.537 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:05.537 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:05.795 [2024-07-15 18:24:58.042249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.795 [2024-07-15 18:24:58.042301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.795 [2024-07-15 18:24:58.042311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.795 [2024-07-15 18:24:58.042860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.795 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.796 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.796 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.796 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.054 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.054 "name": "Existed_Raid", 00:11:06.054 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.054 "strip_size_kb": 64, 00:11:06.054 "state": "configuring", 00:11:06.054 "raid_level": "concat", 00:11:06.054 "superblock": true, 00:11:06.054 "num_base_bdevs": 3, 00:11:06.054 "num_base_bdevs_discovered": 2, 00:11:06.054 "num_base_bdevs_operational": 3, 00:11:06.054 "base_bdevs_list": [ 00:11:06.054 { 00:11:06.054 "name": "BaseBdev1", 00:11:06.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.054 "is_configured": false, 00:11:06.054 "data_offset": 0, 00:11:06.054 "data_size": 0 00:11:06.054 }, 00:11:06.054 { 00:11:06.054 "name": "BaseBdev2", 00:11:06.054 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.054 "is_configured": true, 00:11:06.054 "data_offset": 2048, 00:11:06.054 "data_size": 63488 00:11:06.054 }, 00:11:06.054 { 00:11:06.054 "name": "BaseBdev3", 00:11:06.054 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.054 "is_configured": true, 00:11:06.054 "data_offset": 2048, 00:11:06.054 "data_size": 63488 00:11:06.054 } 00:11:06.054 ] 00:11:06.054 }' 00:11:06.054 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.054 18:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.313 18:24:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:06.572 [2024-07-15 18:24:58.890224] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.572 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.832 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.832 "name": "Existed_Raid", 00:11:06.832 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.832 "strip_size_kb": 64, 00:11:06.832 "state": "configuring", 00:11:06.832 "raid_level": "concat", 00:11:06.832 "superblock": true, 00:11:06.832 "num_base_bdevs": 3, 00:11:06.832 "num_base_bdevs_discovered": 1, 00:11:06.832 "num_base_bdevs_operational": 3, 00:11:06.832 "base_bdevs_list": [ 00:11:06.832 { 00:11:06.832 "name": "BaseBdev1", 00:11:06.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.832 "is_configured": false, 00:11:06.832 "data_offset": 0, 00:11:06.832 "data_size": 0 00:11:06.832 }, 00:11:06.832 { 00:11:06.832 "name": null, 00:11:06.832 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.832 "is_configured": false, 00:11:06.832 "data_offset": 2048, 00:11:06.832 "data_size": 63488 00:11:06.832 }, 00:11:06.832 { 00:11:06.832 "name": "BaseBdev3", 00:11:06.832 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:06.832 "is_configured": true, 00:11:06.832 "data_offset": 2048, 00:11:06.832 "data_size": 63488 00:11:06.832 } 00:11:06.832 ] 00:11:06.832 }' 00:11:06.832 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.832 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.410 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.410 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.410 18:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:07.410 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.668 [2024-07-15 18:24:59.962328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.668 BaseBdev1 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:07.668 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:07.927 18:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.186 [ 00:11:08.186 { 00:11:08.186 "name": "BaseBdev1", 00:11:08.186 "aliases": [ 00:11:08.186 "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5" 00:11:08.186 ], 00:11:08.186 "product_name": "Malloc disk", 00:11:08.186 "block_size": 512, 00:11:08.186 "num_blocks": 65536, 00:11:08.186 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:08.186 "assigned_rate_limits": { 00:11:08.186 "rw_ios_per_sec": 0, 00:11:08.186 "rw_mbytes_per_sec": 0, 00:11:08.186 "r_mbytes_per_sec": 0, 00:11:08.186 "w_mbytes_per_sec": 0 00:11:08.186 }, 00:11:08.186 "claimed": true, 00:11:08.186 "claim_type": "exclusive_write", 00:11:08.186 "zoned": false, 00:11:08.186 "supported_io_types": { 00:11:08.186 "read": true, 00:11:08.186 "write": true, 00:11:08.186 "unmap": true, 00:11:08.186 "flush": true, 00:11:08.186 "reset": true, 00:11:08.186 "nvme_admin": false, 00:11:08.186 "nvme_io": false, 00:11:08.186 "nvme_io_md": false, 00:11:08.186 "write_zeroes": true, 00:11:08.186 "zcopy": true, 00:11:08.186 "get_zone_info": false, 00:11:08.186 "zone_management": false, 00:11:08.186 "zone_append": false, 00:11:08.186 "compare": false, 00:11:08.186 "compare_and_write": false, 00:11:08.186 "abort": true, 00:11:08.186 "seek_hole": false, 00:11:08.186 "seek_data": false, 00:11:08.186 "copy": true, 00:11:08.186 "nvme_iov_md": false 00:11:08.186 }, 00:11:08.186 "memory_domains": [ 00:11:08.186 { 00:11:08.186 "dma_device_id": "system", 00:11:08.186 "dma_device_type": 1 00:11:08.186 }, 00:11:08.186 { 00:11:08.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.186 "dma_device_type": 2 00:11:08.186 } 00:11:08.186 ], 00:11:08.186 "driver_specific": {} 00:11:08.186 } 00:11:08.186 ] 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.186 18:25:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.186 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.445 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.445 "name": "Existed_Raid", 00:11:08.445 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:08.445 "strip_size_kb": 64, 00:11:08.445 "state": "configuring", 00:11:08.445 "raid_level": "concat", 00:11:08.445 "superblock": true, 00:11:08.445 "num_base_bdevs": 3, 00:11:08.445 "num_base_bdevs_discovered": 2, 00:11:08.445 "num_base_bdevs_operational": 3, 00:11:08.445 "base_bdevs_list": [ 00:11:08.445 { 00:11:08.445 "name": "BaseBdev1", 00:11:08.445 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:08.445 "is_configured": true, 00:11:08.445 "data_offset": 2048, 00:11:08.445 "data_size": 63488 00:11:08.445 }, 00:11:08.446 { 00:11:08.446 "name": null, 00:11:08.446 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:08.446 "is_configured": false, 00:11:08.446 "data_offset": 2048, 00:11:08.446 "data_size": 63488 00:11:08.446 }, 00:11:08.446 { 00:11:08.446 "name": "BaseBdev3", 00:11:08.446 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:08.446 "is_configured": true, 00:11:08.446 "data_offset": 2048, 00:11:08.446 "data_size": 63488 00:11:08.446 } 00:11:08.446 ] 00:11:08.446 }' 00:11:08.446 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.446 18:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.705 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.963 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:08.963 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:09.222 [2024-07-15 18:25:01.542175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.222 18:25:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.222 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.480 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.480 "name": "Existed_Raid", 00:11:09.480 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:09.480 "strip_size_kb": 64, 00:11:09.480 "state": "configuring", 00:11:09.480 "raid_level": "concat", 00:11:09.480 "superblock": true, 00:11:09.480 "num_base_bdevs": 3, 00:11:09.480 "num_base_bdevs_discovered": 1, 00:11:09.480 "num_base_bdevs_operational": 3, 00:11:09.480 "base_bdevs_list": [ 00:11:09.480 { 00:11:09.480 "name": "BaseBdev1", 00:11:09.480 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:09.480 "is_configured": true, 00:11:09.480 "data_offset": 2048, 00:11:09.480 "data_size": 63488 00:11:09.480 }, 00:11:09.480 { 00:11:09.480 "name": null, 00:11:09.480 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:09.480 "is_configured": false, 00:11:09.480 "data_offset": 2048, 00:11:09.480 "data_size": 63488 00:11:09.480 }, 00:11:09.480 { 00:11:09.480 "name": null, 00:11:09.480 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:09.480 "is_configured": false, 00:11:09.480 "data_offset": 2048, 00:11:09.480 "data_size": 63488 00:11:09.480 } 00:11:09.480 ] 00:11:09.480 }' 00:11:09.480 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.480 18:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.046 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.046 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.046 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:10.046 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.304 [2024-07-15 18:25:02.670170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.562 18:25:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.562 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.821 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.821 "name": "Existed_Raid", 00:11:10.821 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:10.821 "strip_size_kb": 64, 00:11:10.821 "state": "configuring", 00:11:10.821 "raid_level": "concat", 00:11:10.821 "superblock": true, 00:11:10.821 "num_base_bdevs": 3, 00:11:10.821 "num_base_bdevs_discovered": 2, 00:11:10.821 "num_base_bdevs_operational": 3, 00:11:10.821 "base_bdevs_list": [ 00:11:10.821 { 00:11:10.821 "name": "BaseBdev1", 00:11:10.821 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:10.821 "is_configured": true, 00:11:10.821 "data_offset": 2048, 00:11:10.821 "data_size": 63488 00:11:10.821 }, 00:11:10.821 { 00:11:10.821 "name": null, 00:11:10.821 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:10.821 "is_configured": false, 00:11:10.821 "data_offset": 2048, 00:11:10.821 "data_size": 63488 00:11:10.821 }, 00:11:10.821 { 00:11:10.821 "name": "BaseBdev3", 00:11:10.821 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:10.821 "is_configured": true, 00:11:10.821 "data_offset": 2048, 00:11:10.821 "data_size": 63488 00:11:10.821 } 00:11:10.821 ] 00:11:10.821 }' 00:11:10.821 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.821 18:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.084 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.084 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.343 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:11.343 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:11.603 
[2024-07-15 18:25:03.746168] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.603 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.863 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.863 "name": "Existed_Raid", 00:11:11.863 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:11.863 "strip_size_kb": 64, 00:11:11.863 "state": "configuring", 00:11:11.863 "raid_level": "concat", 00:11:11.863 "superblock": true, 00:11:11.863 "num_base_bdevs": 3, 00:11:11.863 "num_base_bdevs_discovered": 1, 00:11:11.863 "num_base_bdevs_operational": 3, 00:11:11.863 "base_bdevs_list": [ 00:11:11.863 { 00:11:11.863 "name": null, 00:11:11.863 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:11.863 "is_configured": false, 00:11:11.863 "data_offset": 2048, 00:11:11.863 "data_size": 63488 00:11:11.863 }, 00:11:11.863 { 00:11:11.863 "name": null, 00:11:11.863 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:11.863 "is_configured": false, 00:11:11.863 "data_offset": 2048, 00:11:11.863 "data_size": 63488 00:11:11.863 }, 00:11:11.863 { 00:11:11.863 "name": "BaseBdev3", 00:11:11.863 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:11.863 "is_configured": true, 00:11:11.863 "data_offset": 2048, 00:11:11.863 "data_size": 63488 00:11:11.863 } 00:11:11.863 ] 00:11:11.863 }' 00:11:11.863 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.863 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.121 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.121 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.380 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:12.380 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.638 [2024-07-15 18:25:04.882212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.638 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.896 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.896 "name": "Existed_Raid", 00:11:12.896 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:12.896 "strip_size_kb": 64, 00:11:12.896 "state": "configuring", 00:11:12.896 "raid_level": "concat", 00:11:12.896 "superblock": true, 00:11:12.896 "num_base_bdevs": 3, 00:11:12.896 "num_base_bdevs_discovered": 2, 00:11:12.896 "num_base_bdevs_operational": 3, 00:11:12.896 "base_bdevs_list": [ 00:11:12.896 { 00:11:12.896 "name": null, 00:11:12.896 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:12.896 "is_configured": false, 00:11:12.896 "data_offset": 2048, 00:11:12.896 "data_size": 63488 00:11:12.896 }, 00:11:12.896 { 00:11:12.896 "name": "BaseBdev2", 00:11:12.896 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:12.896 "is_configured": true, 00:11:12.896 "data_offset": 2048, 00:11:12.896 "data_size": 63488 00:11:12.896 }, 00:11:12.896 { 00:11:12.896 "name": "BaseBdev3", 00:11:12.896 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:12.896 "is_configured": true, 00:11:12.896 "data_offset": 2048, 00:11:12.896 "data_size": 63488 00:11:12.896 } 00:11:12.896 ] 00:11:12.896 }' 00:11:12.896 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.896 18:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.154 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.154 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.411 18:25:05 bdev_raid.raid_state_function_test_sb 
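
bdev_raid.sh@329-@331 is the matching re-attach half; sketched under the same assumptions (slot index 1 as in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Handing a bdev back repopulates the empty slot; the test then reads the
    # slot's is_configured flag rather than the aggregate counter.
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'  # -> true
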
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:13.411 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.411 18:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5 00:11:13.975 [2024-07-15 18:25:06.302350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.975 [2024-07-15 18:25:06.302401] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x169064434a00 00:11:13.975 [2024-07-15 18:25:06.302406] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:13.975 [2024-07-15 18:25:06.302426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x169064497e20 00:11:13.975 [2024-07-15 18:25:06.302473] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x169064434a00 00:11:13.975 [2024-07-15 18:25:06.302478] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x169064434a00 00:11:13.975 [2024-07-15 18:25:06.302499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.975 NewBaseBdev 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:13.975 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:14.232 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:14.490 [ 00:11:14.490 { 00:11:14.490 "name": "NewBaseBdev", 00:11:14.490 "aliases": [ 00:11:14.490 "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5" 00:11:14.490 ], 00:11:14.490 "product_name": "Malloc disk", 00:11:14.490 "block_size": 512, 00:11:14.490 "num_blocks": 65536, 00:11:14.490 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:14.490 "assigned_rate_limits": { 00:11:14.490 "rw_ios_per_sec": 0, 00:11:14.490 "rw_mbytes_per_sec": 0, 00:11:14.490 "r_mbytes_per_sec": 0, 00:11:14.490 "w_mbytes_per_sec": 0 00:11:14.490 }, 00:11:14.490 "claimed": true, 00:11:14.490 "claim_type": "exclusive_write", 00:11:14.490 "zoned": false, 00:11:14.490 "supported_io_types": { 00:11:14.490 "read": true, 00:11:14.490 "write": true, 00:11:14.490 "unmap": true, 00:11:14.490 "flush": true, 00:11:14.490 "reset": true, 00:11:14.490 "nvme_admin": false, 00:11:14.490 "nvme_io": false, 00:11:14.490 "nvme_io_md": false, 00:11:14.490 
"write_zeroes": true, 00:11:14.490 "zcopy": true, 00:11:14.490 "get_zone_info": false, 00:11:14.490 "zone_management": false, 00:11:14.490 "zone_append": false, 00:11:14.490 "compare": false, 00:11:14.490 "compare_and_write": false, 00:11:14.490 "abort": true, 00:11:14.490 "seek_hole": false, 00:11:14.490 "seek_data": false, 00:11:14.490 "copy": true, 00:11:14.490 "nvme_iov_md": false 00:11:14.490 }, 00:11:14.490 "memory_domains": [ 00:11:14.490 { 00:11:14.490 "dma_device_id": "system", 00:11:14.490 "dma_device_type": 1 00:11:14.490 }, 00:11:14.490 { 00:11:14.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.490 "dma_device_type": 2 00:11:14.490 } 00:11:14.490 ], 00:11:14.490 "driver_specific": {} 00:11:14.490 } 00:11:14.490 ] 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.490 18:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.747 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.747 "name": "Existed_Raid", 00:11:14.747 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:14.747 "strip_size_kb": 64, 00:11:14.747 "state": "online", 00:11:14.747 "raid_level": "concat", 00:11:14.747 "superblock": true, 00:11:14.747 "num_base_bdevs": 3, 00:11:14.747 "num_base_bdevs_discovered": 3, 00:11:14.747 "num_base_bdevs_operational": 3, 00:11:14.747 "base_bdevs_list": [ 00:11:14.747 { 00:11:14.747 "name": "NewBaseBdev", 00:11:14.747 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:14.747 "is_configured": true, 00:11:14.747 "data_offset": 2048, 00:11:14.747 "data_size": 63488 00:11:14.747 }, 00:11:14.747 { 00:11:14.747 "name": "BaseBdev2", 00:11:14.747 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:14.747 "is_configured": true, 00:11:14.747 "data_offset": 2048, 00:11:14.747 "data_size": 63488 00:11:14.747 }, 00:11:14.747 { 00:11:14.747 "name": "BaseBdev3", 00:11:14.747 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:14.747 "is_configured": true, 00:11:14.747 "data_offset": 2048, 00:11:14.747 "data_size": 63488 00:11:14.747 } 00:11:14.747 ] 
00:11:14.747 }' 00:11:14.747 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.747 18:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:15.004 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:15.261 [2024-07-15 18:25:07.586254] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.261 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:15.261 "name": "Existed_Raid", 00:11:15.262 "aliases": [ 00:11:15.262 "8a79eb14-42d7-11ef-9ade-d5fc5159efa5" 00:11:15.262 ], 00:11:15.262 "product_name": "Raid Volume", 00:11:15.262 "block_size": 512, 00:11:15.262 "num_blocks": 190464, 00:11:15.262 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.262 "assigned_rate_limits": { 00:11:15.262 "rw_ios_per_sec": 0, 00:11:15.262 "rw_mbytes_per_sec": 0, 00:11:15.262 "r_mbytes_per_sec": 0, 00:11:15.262 "w_mbytes_per_sec": 0 00:11:15.262 }, 00:11:15.262 "claimed": false, 00:11:15.262 "zoned": false, 00:11:15.262 "supported_io_types": { 00:11:15.262 "read": true, 00:11:15.262 "write": true, 00:11:15.262 "unmap": true, 00:11:15.262 "flush": true, 00:11:15.262 "reset": true, 00:11:15.262 "nvme_admin": false, 00:11:15.262 "nvme_io": false, 00:11:15.262 "nvme_io_md": false, 00:11:15.262 "write_zeroes": true, 00:11:15.262 "zcopy": false, 00:11:15.262 "get_zone_info": false, 00:11:15.262 "zone_management": false, 00:11:15.262 "zone_append": false, 00:11:15.262 "compare": false, 00:11:15.262 "compare_and_write": false, 00:11:15.262 "abort": false, 00:11:15.262 "seek_hole": false, 00:11:15.262 "seek_data": false, 00:11:15.262 "copy": false, 00:11:15.262 "nvme_iov_md": false 00:11:15.262 }, 00:11:15.262 "memory_domains": [ 00:11:15.262 { 00:11:15.262 "dma_device_id": "system", 00:11:15.262 "dma_device_type": 1 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.262 "dma_device_type": 2 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "dma_device_id": "system", 00:11:15.262 "dma_device_type": 1 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.262 "dma_device_type": 2 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "dma_device_id": "system", 00:11:15.262 "dma_device_type": 1 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.262 "dma_device_type": 2 00:11:15.262 } 00:11:15.262 ], 00:11:15.262 "driver_specific": { 00:11:15.262 "raid": { 00:11:15.262 "uuid": "8a79eb14-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.262 
"strip_size_kb": 64, 00:11:15.262 "state": "online", 00:11:15.262 "raid_level": "concat", 00:11:15.262 "superblock": true, 00:11:15.262 "num_base_bdevs": 3, 00:11:15.262 "num_base_bdevs_discovered": 3, 00:11:15.262 "num_base_bdevs_operational": 3, 00:11:15.262 "base_bdevs_list": [ 00:11:15.262 { 00:11:15.262 "name": "NewBaseBdev", 00:11:15.262 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.262 "is_configured": true, 00:11:15.262 "data_offset": 2048, 00:11:15.262 "data_size": 63488 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "name": "BaseBdev2", 00:11:15.262 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.262 "is_configured": true, 00:11:15.262 "data_offset": 2048, 00:11:15.262 "data_size": 63488 00:11:15.262 }, 00:11:15.262 { 00:11:15.262 "name": "BaseBdev3", 00:11:15.262 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.262 "is_configured": true, 00:11:15.262 "data_offset": 2048, 00:11:15.262 "data_size": 63488 00:11:15.262 } 00:11:15.262 ] 00:11:15.262 } 00:11:15.262 } 00:11:15.262 }' 00:11:15.262 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.262 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:15.262 BaseBdev2 00:11:15.262 BaseBdev3' 00:11:15.262 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:15.262 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:15.262 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:15.520 "name": "NewBaseBdev", 00:11:15.520 "aliases": [ 00:11:15.520 "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5" 00:11:15.520 ], 00:11:15.520 "product_name": "Malloc disk", 00:11:15.520 "block_size": 512, 00:11:15.520 "num_blocks": 65536, 00:11:15.520 "uuid": "8b9ee1d7-42d7-11ef-9ade-d5fc5159efa5", 00:11:15.520 "assigned_rate_limits": { 00:11:15.520 "rw_ios_per_sec": 0, 00:11:15.520 "rw_mbytes_per_sec": 0, 00:11:15.520 "r_mbytes_per_sec": 0, 00:11:15.520 "w_mbytes_per_sec": 0 00:11:15.520 }, 00:11:15.520 "claimed": true, 00:11:15.520 "claim_type": "exclusive_write", 00:11:15.520 "zoned": false, 00:11:15.520 "supported_io_types": { 00:11:15.520 "read": true, 00:11:15.520 "write": true, 00:11:15.520 "unmap": true, 00:11:15.520 "flush": true, 00:11:15.520 "reset": true, 00:11:15.520 "nvme_admin": false, 00:11:15.520 "nvme_io": false, 00:11:15.520 "nvme_io_md": false, 00:11:15.520 "write_zeroes": true, 00:11:15.520 "zcopy": true, 00:11:15.520 "get_zone_info": false, 00:11:15.520 "zone_management": false, 00:11:15.520 "zone_append": false, 00:11:15.520 "compare": false, 00:11:15.520 "compare_and_write": false, 00:11:15.520 "abort": true, 00:11:15.520 "seek_hole": false, 00:11:15.520 "seek_data": false, 00:11:15.520 "copy": true, 00:11:15.520 "nvme_iov_md": false 00:11:15.520 }, 00:11:15.520 "memory_domains": [ 00:11:15.520 { 00:11:15.520 "dma_device_id": "system", 00:11:15.520 "dma_device_type": 1 00:11:15.520 }, 00:11:15.520 { 00:11:15.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.520 "dma_device_type": 2 00:11:15.520 } 00:11:15.520 ], 00:11:15.520 "driver_specific": {} 00:11:15.520 }' 00:11:15.520 18:25:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:15.520 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:15.778 18:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:16.037 "name": "BaseBdev2", 00:11:16.037 "aliases": [ 00:11:16.037 "899643c3-42d7-11ef-9ade-d5fc5159efa5" 00:11:16.037 ], 00:11:16.037 "product_name": "Malloc disk", 00:11:16.037 "block_size": 512, 00:11:16.037 "num_blocks": 65536, 00:11:16.037 "uuid": "899643c3-42d7-11ef-9ade-d5fc5159efa5", 00:11:16.037 "assigned_rate_limits": { 00:11:16.037 "rw_ios_per_sec": 0, 00:11:16.037 "rw_mbytes_per_sec": 0, 00:11:16.037 "r_mbytes_per_sec": 0, 00:11:16.037 "w_mbytes_per_sec": 0 00:11:16.037 }, 00:11:16.037 "claimed": true, 00:11:16.037 "claim_type": "exclusive_write", 00:11:16.037 "zoned": false, 00:11:16.037 "supported_io_types": { 00:11:16.037 "read": true, 00:11:16.037 "write": true, 00:11:16.037 "unmap": true, 00:11:16.037 "flush": true, 00:11:16.037 "reset": true, 00:11:16.037 "nvme_admin": false, 00:11:16.037 "nvme_io": false, 00:11:16.037 "nvme_io_md": false, 00:11:16.037 "write_zeroes": true, 00:11:16.037 "zcopy": true, 00:11:16.037 "get_zone_info": false, 00:11:16.037 "zone_management": false, 00:11:16.037 "zone_append": false, 00:11:16.037 "compare": false, 00:11:16.037 "compare_and_write": false, 00:11:16.037 "abort": true, 00:11:16.037 "seek_hole": false, 00:11:16.037 "seek_data": false, 00:11:16.037 "copy": true, 00:11:16.037 "nvme_iov_md": false 00:11:16.037 }, 00:11:16.037 "memory_domains": [ 00:11:16.037 { 00:11:16.037 "dma_device_id": "system", 00:11:16.037 "dma_device_type": 1 00:11:16.037 }, 00:11:16.037 { 00:11:16.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.037 "dma_device_type": 2 00:11:16.037 } 00:11:16.037 ], 00:11:16.037 "driver_specific": {} 00:11:16.037 }' 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:16.037 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:16.295 "name": "BaseBdev3", 00:11:16.295 "aliases": [ 00:11:16.295 "8a024b2f-42d7-11ef-9ade-d5fc5159efa5" 00:11:16.295 ], 00:11:16.295 "product_name": "Malloc disk", 00:11:16.295 "block_size": 512, 00:11:16.295 "num_blocks": 65536, 00:11:16.295 "uuid": "8a024b2f-42d7-11ef-9ade-d5fc5159efa5", 00:11:16.295 "assigned_rate_limits": { 00:11:16.295 "rw_ios_per_sec": 0, 00:11:16.295 "rw_mbytes_per_sec": 0, 00:11:16.295 "r_mbytes_per_sec": 0, 00:11:16.295 "w_mbytes_per_sec": 0 00:11:16.295 }, 00:11:16.295 "claimed": true, 00:11:16.295 "claim_type": "exclusive_write", 00:11:16.295 "zoned": false, 00:11:16.295 "supported_io_types": { 00:11:16.295 "read": true, 00:11:16.295 "write": true, 00:11:16.295 "unmap": true, 00:11:16.295 "flush": true, 00:11:16.295 "reset": true, 00:11:16.295 "nvme_admin": false, 00:11:16.295 "nvme_io": false, 00:11:16.295 "nvme_io_md": false, 00:11:16.295 "write_zeroes": true, 00:11:16.295 "zcopy": true, 00:11:16.295 "get_zone_info": false, 00:11:16.295 "zone_management": false, 00:11:16.295 "zone_append": false, 00:11:16.295 "compare": false, 00:11:16.295 "compare_and_write": false, 00:11:16.295 "abort": true, 00:11:16.295 "seek_hole": false, 00:11:16.295 "seek_data": false, 00:11:16.295 "copy": true, 00:11:16.295 "nvme_iov_md": false 00:11:16.295 }, 00:11:16.295 "memory_domains": [ 00:11:16.295 { 00:11:16.295 "dma_device_id": "system", 00:11:16.295 "dma_device_type": 1 00:11:16.295 }, 00:11:16.295 { 00:11:16.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.295 "dma_device_type": 2 00:11:16.295 } 00:11:16.295 ], 00:11:16.295 "driver_specific": {} 00:11:16.295 }' 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
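
The bdev_raid.sh@205-@208 block repeats verbatim for each member; folded into a loop, the four probes it performs are (member names from this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Block size must match the array's 512 B; metadata, interleave and DIF
    # must be absent (jq prints null for keys bdev_get_bdevs does not emit).
    for b in NewBaseBdev BaseBdev2 BaseBdev3; do
        info=$($rpc bdev_get_bdevs -b "$b" | jq '.[]')
        [[ $(jq .block_size    <<<"$info") == 512  ]] || echo "block_size: $b"
        [[ $(jq .md_size       <<<"$info") == null ]] || echo "md_size: $b"
        [[ $(jq .md_interleave <<<"$info") == null ]] || echo "md_interleave: $b"
        [[ $(jq .dif_type      <<<"$info") == null ]] || echo "dif_type: $b"
    done
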
00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:16.295 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:16.602 [2024-07-15 18:25:08.846236] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.602 [2024-07-15 18:25:08.846272] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.602 [2024-07-15 18:25:08.846304] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.602 [2024-07-15 18:25:08.846318] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.602 [2024-07-15 18:25:08.846322] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x169064434a00 name Existed_Raid, state offline 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54781 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54781 ']' 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54781 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54781 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:16.602 killing process with pid 54781 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54781' 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54781 00:11:16.602 [2024-07-15 18:25:08.874582] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.602 18:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54781 00:11:16.602 [2024-07-15 18:25:08.898115] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.861 18:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:11:16.861 00:11:16.861 real 0m23.991s 00:11:16.861 user 0m43.771s 00:11:16.861 sys 0m3.324s 00:11:16.861 ************************************ 00:11:16.861 END TEST raid_state_function_test_sb 00:11:16.861 ************************************ 00:11:16.861 18:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.861 18:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.861 18:25:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:16.861 18:25:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:16.861 18:25:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:16.861 18:25:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.861 18:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.861 ************************************ 00:11:16.861 START TEST raid_superblock_test 00:11:16.861 ************************************ 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55505 00:11:16.861 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55505 /var/tmp/spdk-raid.sock 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55505 ']' 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:16.862 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.862 18:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.862 [2024-07-15 18:25:09.191597] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:11:16.862 [2024-07-15 18:25:09.191770] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:17.429 EAL: TSC is not safe to use in SMP mode 00:11:17.429 EAL: TSC is not invariant 00:11:17.429 [2024-07-15 18:25:09.806844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.688 [2024-07-15 18:25:09.919680] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:17.688 [2024-07-15 18:25:09.921915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.688 [2024-07-15 18:25:09.922747] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.688 [2024-07-15 18:25:09.922762] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.946 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:18.204 malloc1 00:11:18.204 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.461 [2024-07-15 18:25:10.742770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.462 [2024-07-15 18:25:10.742847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.462 [2024-07-15 18:25:10.742861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709034780 00:11:18.462 [2024-07-15 18:25:10.742870] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.462 [2024-07-15 18:25:10.743864] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.462 [2024-07-15 18:25:10.743890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.462 pt1 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.462 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:18.720 malloc2 00:11:18.720 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.978 [2024-07-15 18:25:11.266779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.978 [2024-07-15 18:25:11.266843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.978 [2024-07-15 18:25:11.266857] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709034c80 00:11:18.978 [2024-07-15 18:25:11.266866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.978 [2024-07-15 18:25:11.267596] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.978 [2024-07-15 18:25:11.267623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.978 pt2 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.978 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:19.236 malloc3 00:11:19.236 18:25:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.495 [2024-07-15 18:25:11.742777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.495 [2024-07-15 18:25:11.742834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.495 [2024-07-15 18:25:11.742847] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709035180 00:11:19.495 [2024-07-15 18:25:11.742856] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.495 [2024-07-15 18:25:11.743604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.495 [2024-07-15 18:25:11.743630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.495 pt3 00:11:19.495 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:19.495 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:19.495 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:19.753 [2024-07-15 18:25:12.082818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:19.753 [2024-07-15 18:25:12.083502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.753 [2024-07-15 18:25:12.083526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:19.753 [2024-07-15 18:25:12.083578] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x168709035400 00:11:19.753 [2024-07-15 18:25:12.083584] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.753 [2024-07-15 18:25:12.083620] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x168709097e20 00:11:19.753 [2024-07-15 18:25:12.083698] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x168709035400 00:11:19.753 [2024-07-15 18:25:12.083703] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x168709035400 00:11:19.753 [2024-07-15 18:25:12.083732] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.753 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.012 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.012 "name": "raid_bdev1", 00:11:20.012 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:20.012 "strip_size_kb": 64, 00:11:20.012 "state": "online", 00:11:20.012 "raid_level": "concat", 00:11:20.012 "superblock": true, 00:11:20.012 "num_base_bdevs": 3, 00:11:20.012 "num_base_bdevs_discovered": 3, 00:11:20.012 "num_base_bdevs_operational": 3, 00:11:20.012 "base_bdevs_list": [ 00:11:20.012 { 00:11:20.012 "name": "pt1", 00:11:20.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.012 "is_configured": true, 00:11:20.012 "data_offset": 2048, 00:11:20.012 "data_size": 63488 00:11:20.012 }, 00:11:20.012 { 00:11:20.012 "name": "pt2", 00:11:20.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.012 "is_configured": true, 00:11:20.012 "data_offset": 2048, 00:11:20.012 "data_size": 63488 00:11:20.012 }, 00:11:20.012 { 00:11:20.012 "name": "pt3", 00:11:20.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.012 "is_configured": true, 00:11:20.012 "data_offset": 2048, 00:11:20.012 "data_size": 63488 00:11:20.012 } 00:11:20.012 ] 00:11:20.012 }' 00:11:20.012 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.012 18:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:20.578 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:20.836 [2024-07-15 18:25:12.966851] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.836 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:20.836 "name": "raid_bdev1", 00:11:20.836 "aliases": [ 00:11:20.836 "92d8568e-42d7-11ef-9ade-d5fc5159efa5" 00:11:20.836 ], 00:11:20.836 "product_name": "Raid Volume", 00:11:20.836 "block_size": 512, 00:11:20.836 "num_blocks": 190464, 00:11:20.836 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:20.836 "assigned_rate_limits": { 00:11:20.836 "rw_ios_per_sec": 0, 00:11:20.836 "rw_mbytes_per_sec": 0, 00:11:20.836 "r_mbytes_per_sec": 0, 00:11:20.836 "w_mbytes_per_sec": 0 00:11:20.836 }, 00:11:20.837 "claimed": false, 00:11:20.837 "zoned": false, 00:11:20.837 "supported_io_types": { 00:11:20.837 "read": true, 00:11:20.837 "write": true, 00:11:20.837 "unmap": true, 
00:11:20.837 "flush": true, 00:11:20.837 "reset": true, 00:11:20.837 "nvme_admin": false, 00:11:20.837 "nvme_io": false, 00:11:20.837 "nvme_io_md": false, 00:11:20.837 "write_zeroes": true, 00:11:20.837 "zcopy": false, 00:11:20.837 "get_zone_info": false, 00:11:20.837 "zone_management": false, 00:11:20.837 "zone_append": false, 00:11:20.837 "compare": false, 00:11:20.837 "compare_and_write": false, 00:11:20.837 "abort": false, 00:11:20.837 "seek_hole": false, 00:11:20.837 "seek_data": false, 00:11:20.837 "copy": false, 00:11:20.837 "nvme_iov_md": false 00:11:20.837 }, 00:11:20.837 "memory_domains": [ 00:11:20.837 { 00:11:20.837 "dma_device_id": "system", 00:11:20.837 "dma_device_type": 1 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.837 "dma_device_type": 2 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "dma_device_id": "system", 00:11:20.837 "dma_device_type": 1 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.837 "dma_device_type": 2 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "dma_device_id": "system", 00:11:20.837 "dma_device_type": 1 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.837 "dma_device_type": 2 00:11:20.837 } 00:11:20.837 ], 00:11:20.837 "driver_specific": { 00:11:20.837 "raid": { 00:11:20.837 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:20.837 "strip_size_kb": 64, 00:11:20.837 "state": "online", 00:11:20.837 "raid_level": "concat", 00:11:20.837 "superblock": true, 00:11:20.837 "num_base_bdevs": 3, 00:11:20.837 "num_base_bdevs_discovered": 3, 00:11:20.837 "num_base_bdevs_operational": 3, 00:11:20.837 "base_bdevs_list": [ 00:11:20.837 { 00:11:20.837 "name": "pt1", 00:11:20.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.837 "is_configured": true, 00:11:20.837 "data_offset": 2048, 00:11:20.837 "data_size": 63488 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "name": "pt2", 00:11:20.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.837 "is_configured": true, 00:11:20.837 "data_offset": 2048, 00:11:20.837 "data_size": 63488 00:11:20.837 }, 00:11:20.837 { 00:11:20.837 "name": "pt3", 00:11:20.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.837 "is_configured": true, 00:11:20.837 "data_offset": 2048, 00:11:20.837 "data_size": 63488 00:11:20.837 } 00:11:20.837 ] 00:11:20.837 } 00:11:20.837 } 00:11:20.837 }' 00:11:20.837 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.837 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:20.837 pt2 00:11:20.837 pt3' 00:11:20.837 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:20.837 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:20.837 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:21.094 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.094 "name": "pt1", 00:11:21.094 "aliases": [ 00:11:21.094 "00000000-0000-0000-0000-000000000001" 00:11:21.094 ], 00:11:21.094 "product_name": "passthru", 00:11:21.094 "block_size": 512, 00:11:21.094 "num_blocks": 65536, 00:11:21.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.094 "assigned_rate_limits": { 
00:11:21.094 "rw_ios_per_sec": 0, 00:11:21.094 "rw_mbytes_per_sec": 0, 00:11:21.094 "r_mbytes_per_sec": 0, 00:11:21.094 "w_mbytes_per_sec": 0 00:11:21.094 }, 00:11:21.094 "claimed": true, 00:11:21.094 "claim_type": "exclusive_write", 00:11:21.094 "zoned": false, 00:11:21.094 "supported_io_types": { 00:11:21.094 "read": true, 00:11:21.094 "write": true, 00:11:21.094 "unmap": true, 00:11:21.094 "flush": true, 00:11:21.094 "reset": true, 00:11:21.094 "nvme_admin": false, 00:11:21.094 "nvme_io": false, 00:11:21.094 "nvme_io_md": false, 00:11:21.094 "write_zeroes": true, 00:11:21.094 "zcopy": true, 00:11:21.094 "get_zone_info": false, 00:11:21.094 "zone_management": false, 00:11:21.094 "zone_append": false, 00:11:21.094 "compare": false, 00:11:21.094 "compare_and_write": false, 00:11:21.094 "abort": true, 00:11:21.094 "seek_hole": false, 00:11:21.094 "seek_data": false, 00:11:21.094 "copy": true, 00:11:21.094 "nvme_iov_md": false 00:11:21.094 }, 00:11:21.094 "memory_domains": [ 00:11:21.094 { 00:11:21.094 "dma_device_id": "system", 00:11:21.094 "dma_device_type": 1 00:11:21.094 }, 00:11:21.094 { 00:11:21.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.094 "dma_device_type": 2 00:11:21.094 } 00:11:21.094 ], 00:11:21.094 "driver_specific": { 00:11:21.094 "passthru": { 00:11:21.094 "name": "pt1", 00:11:21.095 "base_bdev_name": "malloc1" 00:11:21.095 } 00:11:21.095 } 00:11:21.095 }' 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:21.095 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.352 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.352 "name": "pt2", 00:11:21.353 "aliases": [ 00:11:21.353 "00000000-0000-0000-0000-000000000002" 00:11:21.353 ], 00:11:21.353 "product_name": "passthru", 00:11:21.353 "block_size": 512, 00:11:21.353 "num_blocks": 65536, 00:11:21.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.353 "assigned_rate_limits": { 00:11:21.353 "rw_ios_per_sec": 0, 00:11:21.353 "rw_mbytes_per_sec": 0, 00:11:21.353 "r_mbytes_per_sec": 0, 00:11:21.353 "w_mbytes_per_sec": 0 00:11:21.353 
}, 00:11:21.353 "claimed": true, 00:11:21.353 "claim_type": "exclusive_write", 00:11:21.353 "zoned": false, 00:11:21.353 "supported_io_types": { 00:11:21.353 "read": true, 00:11:21.353 "write": true, 00:11:21.353 "unmap": true, 00:11:21.353 "flush": true, 00:11:21.353 "reset": true, 00:11:21.353 "nvme_admin": false, 00:11:21.353 "nvme_io": false, 00:11:21.353 "nvme_io_md": false, 00:11:21.353 "write_zeroes": true, 00:11:21.353 "zcopy": true, 00:11:21.353 "get_zone_info": false, 00:11:21.353 "zone_management": false, 00:11:21.353 "zone_append": false, 00:11:21.353 "compare": false, 00:11:21.353 "compare_and_write": false, 00:11:21.353 "abort": true, 00:11:21.353 "seek_hole": false, 00:11:21.353 "seek_data": false, 00:11:21.353 "copy": true, 00:11:21.353 "nvme_iov_md": false 00:11:21.353 }, 00:11:21.353 "memory_domains": [ 00:11:21.353 { 00:11:21.353 "dma_device_id": "system", 00:11:21.353 "dma_device_type": 1 00:11:21.353 }, 00:11:21.353 { 00:11:21.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.353 "dma_device_type": 2 00:11:21.353 } 00:11:21.353 ], 00:11:21.353 "driver_specific": { 00:11:21.353 "passthru": { 00:11:21.353 "name": "pt2", 00:11:21.353 "base_bdev_name": "malloc2" 00:11:21.353 } 00:11:21.353 } 00:11:21.353 }' 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.353 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.611 "name": "pt3", 00:11:21.611 "aliases": [ 00:11:21.611 "00000000-0000-0000-0000-000000000003" 00:11:21.611 ], 00:11:21.611 "product_name": "passthru", 00:11:21.611 "block_size": 512, 00:11:21.611 "num_blocks": 65536, 00:11:21.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.611 "assigned_rate_limits": { 00:11:21.611 "rw_ios_per_sec": 0, 00:11:21.611 "rw_mbytes_per_sec": 0, 00:11:21.611 "r_mbytes_per_sec": 0, 00:11:21.611 "w_mbytes_per_sec": 0 00:11:21.611 }, 00:11:21.611 "claimed": true, 00:11:21.611 "claim_type": "exclusive_write", 00:11:21.611 "zoned": false, 00:11:21.611 "supported_io_types": { 
00:11:21.611 "read": true, 00:11:21.611 "write": true, 00:11:21.611 "unmap": true, 00:11:21.611 "flush": true, 00:11:21.611 "reset": true, 00:11:21.611 "nvme_admin": false, 00:11:21.611 "nvme_io": false, 00:11:21.611 "nvme_io_md": false, 00:11:21.611 "write_zeroes": true, 00:11:21.611 "zcopy": true, 00:11:21.611 "get_zone_info": false, 00:11:21.611 "zone_management": false, 00:11:21.611 "zone_append": false, 00:11:21.611 "compare": false, 00:11:21.611 "compare_and_write": false, 00:11:21.611 "abort": true, 00:11:21.611 "seek_hole": false, 00:11:21.611 "seek_data": false, 00:11:21.611 "copy": true, 00:11:21.611 "nvme_iov_md": false 00:11:21.611 }, 00:11:21.611 "memory_domains": [ 00:11:21.611 { 00:11:21.611 "dma_device_id": "system", 00:11:21.611 "dma_device_type": 1 00:11:21.611 }, 00:11:21.611 { 00:11:21.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.611 "dma_device_type": 2 00:11:21.611 } 00:11:21.611 ], 00:11:21.611 "driver_specific": { 00:11:21.611 "passthru": { 00:11:21.611 "name": "pt3", 00:11:21.611 "base_bdev_name": "malloc3" 00:11:21.611 } 00:11:21.611 } 00:11:21.611 }' 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.611 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.869 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:21.869 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:22.126 [2024-07-15 18:25:14.330947] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.126 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=92d8568e-42d7-11ef-9ade-d5fc5159efa5 00:11:22.126 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 92d8568e-42d7-11ef-9ade-d5fc5159efa5 ']' 00:11:22.126 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:22.384 [2024-07-15 18:25:14.602819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.384 [2024-07-15 18:25:14.602847] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.384 [2024-07-15 18:25:14.602872] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.384 [2024-07-15 18:25:14.602887] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.384 [2024-07-15 18:25:14.602892] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x168709035400 name raid_bdev1, state offline 00:11:22.384 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.384 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:22.642 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:22.642 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:22.642 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.642 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:22.900 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.900 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:23.158 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:23.158 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:23.417 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:23.417 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
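The NOT wrapper being set up above exists to run the next bdev_raid_create and require that it fail: raid_bdev1 and the pt1-pt3 passthru bdevs were just deleted, but the RAID superblock written during assembly still sits on malloc1-malloc3, so re-creating the array directly on those bdevs is rejected with -17 (File exists), as the error output below shows. A minimal sketch of the same negative check, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock with malloc1-malloc3 still claimed by the old superblock (the rpc.py path and bdev names mirror this log; the wrapper logic is simplified):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # tear down the array and its passthru layers; malloc1..3 keep their superblocks
  $RPC bdev_raid_delete raid_bdev1
  for pt in pt1 pt2 pt3; do $RPC bdev_passthru_delete "$pt"; done
  # re-creating on the still-claimed malloc bdevs must fail (JSON-RPC -17, File exists)
  if $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo 'bdev_raid_create unexpectedly succeeded' >&2
      exit 1
  fi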
00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:23.675 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:23.675 [2024-07-15 18:25:16.026856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:23.675 [2024-07-15 18:25:16.027502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:23.675 [2024-07-15 18:25:16.027523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:23.675 [2024-07-15 18:25:16.027541] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:23.675 [2024-07-15 18:25:16.027586] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:23.675 [2024-07-15 18:25:16.027599] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:23.675 [2024-07-15 18:25:16.027608] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.675 [2024-07-15 18:25:16.027613] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x168709035180 name raid_bdev1, state configuring 00:11:23.675 request: 00:11:23.675 { 00:11:23.675 "name": "raid_bdev1", 00:11:23.675 "raid_level": "concat", 00:11:23.675 "base_bdevs": [ 00:11:23.675 "malloc1", 00:11:23.675 "malloc2", 00:11:23.675 "malloc3" 00:11:23.675 ], 00:11:23.675 "strip_size_kb": 64, 00:11:23.675 "superblock": false, 00:11:23.675 "method": "bdev_raid_create", 00:11:23.675 "req_id": 1 00:11:23.675 } 00:11:23.675 Got JSON-RPC error response 00:11:23.675 response: 00:11:23.675 { 00:11:23.675 "code": -17, 00:11:23.675 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:23.675 } 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.675 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:24.242 [2024-07-15 18:25:16.550855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:24.242 [2024-07-15 18:25:16.550933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.242 [2024-07-15 18:25:16.550946] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x168709034c80 00:11:24.242 [2024-07-15 18:25:16.550955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.242 [2024-07-15 18:25:16.551649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.242 [2024-07-15 18:25:16.551674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:24.242 [2024-07-15 18:25:16.551700] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:24.242 [2024-07-15 18:25:16.551723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:24.242 pt1 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.242 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.501 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.501 "name": "raid_bdev1", 00:11:24.501 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:24.501 "strip_size_kb": 64, 00:11:24.501 "state": "configuring", 00:11:24.501 "raid_level": "concat", 00:11:24.501 "superblock": true, 00:11:24.501 "num_base_bdevs": 3, 00:11:24.501 "num_base_bdevs_discovered": 1, 00:11:24.501 "num_base_bdevs_operational": 3, 00:11:24.501 "base_bdevs_list": [ 00:11:24.501 { 00:11:24.501 "name": "pt1", 00:11:24.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.501 "is_configured": true, 00:11:24.501 "data_offset": 2048, 00:11:24.501 "data_size": 63488 00:11:24.501 }, 00:11:24.501 { 00:11:24.501 "name": null, 00:11:24.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.501 "is_configured": false, 00:11:24.501 "data_offset": 2048, 00:11:24.501 "data_size": 63488 00:11:24.501 }, 00:11:24.501 { 00:11:24.501 "name": null, 00:11:24.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.501 "is_configured": false, 00:11:24.501 "data_offset": 2048, 00:11:24.501 "data_size": 63488 00:11:24.501 } 00:11:24.501 ] 00:11:24.501 }' 00:11:24.501 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.501 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.760 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
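The verify_raid_bdev_state call above reduces to fetching the array's JSON via bdev_raid_get_bdevs and comparing a few fields against the expected values (here: state "configuring", level concat, strip size 64, 3 operational base bdevs, of which only pt1 is discovered so far). A simplified sketch of that check, assuming the same RPC socket; the real helper in bdev_raid.sh tracks a few more counters than this:

  check_raid_state() { # usage: check_raid_state raid_bdev1 configuring concat 64 3
      local name=$1 state=$2 level=$3 strip=$4 nbdevs=$5 info
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r .state <<<"$info") == "$state" ]] &&
          [[ $(jq -r .raid_level <<<"$info") == "$level" ]] &&
          [[ $(jq -r .strip_size_kb <<<"$info") == "$strip" ]] &&
          [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$nbdevs" ]]
  }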
00:11:24.760 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.018 [2024-07-15 18:25:17.322873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.018 [2024-07-15 18:25:17.322935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.018 [2024-07-15 18:25:17.322959] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709035680 00:11:25.018 [2024-07-15 18:25:17.322967] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.018 [2024-07-15 18:25:17.323087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.018 [2024-07-15 18:25:17.323099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.018 [2024-07-15 18:25:17.323123] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:25.018 [2024-07-15 18:25:17.323132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.018 pt2 00:11:25.018 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:25.277 [2024-07-15 18:25:17.558882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.277 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.278 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.278 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.278 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.278 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.536 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.536 "name": "raid_bdev1", 00:11:25.536 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:25.536 "strip_size_kb": 64, 00:11:25.536 "state": "configuring", 00:11:25.536 "raid_level": "concat", 00:11:25.536 "superblock": true, 00:11:25.536 "num_base_bdevs": 3, 00:11:25.536 "num_base_bdevs_discovered": 1, 00:11:25.536 "num_base_bdevs_operational": 3, 00:11:25.536 "base_bdevs_list": [ 00:11:25.536 { 00:11:25.536 "name": "pt1", 00:11:25.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.536 "is_configured": 
true, 00:11:25.536 "data_offset": 2048, 00:11:25.536 "data_size": 63488 00:11:25.536 }, 00:11:25.536 { 00:11:25.536 "name": null, 00:11:25.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.536 "is_configured": false, 00:11:25.536 "data_offset": 2048, 00:11:25.536 "data_size": 63488 00:11:25.536 }, 00:11:25.536 { 00:11:25.536 "name": null, 00:11:25.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.536 "is_configured": false, 00:11:25.536 "data_offset": 2048, 00:11:25.536 "data_size": 63488 00:11:25.536 } 00:11:25.536 ] 00:11:25.536 }' 00:11:25.536 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.536 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.103 [2024-07-15 18:25:18.402899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.103 [2024-07-15 18:25:18.402973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.103 [2024-07-15 18:25:18.402985] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709035680 00:11:26.103 [2024-07-15 18:25:18.402994] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.103 [2024-07-15 18:25:18.403112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.103 [2024-07-15 18:25:18.403124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.103 [2024-07-15 18:25:18.403147] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.103 [2024-07-15 18:25:18.403156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.103 pt2 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:26.103 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.362 [2024-07-15 18:25:18.686902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.362 [2024-07-15 18:25:18.686979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.362 [2024-07-15 18:25:18.686990] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168709035400 00:11:26.362 [2024-07-15 18:25:18.686998] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.362 [2024-07-15 18:25:18.687122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.362 [2024-07-15 18:25:18.687133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.362 [2024-07-15 18:25:18.687156] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.362 [2024-07-15 18:25:18.687165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:11:26.362 [2024-07-15 18:25:18.687203] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x168709034780 00:11:26.362 [2024-07-15 18:25:18.687208] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:26.362 [2024-07-15 18:25:18.687230] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x168709097e20 00:11:26.362 [2024-07-15 18:25:18.687290] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x168709034780 00:11:26.362 [2024-07-15 18:25:18.687295] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x168709034780 00:11:26.362 [2024-07-15 18:25:18.687317] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.362 pt3 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.362 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.621 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:26.621 "name": "raid_bdev1", 00:11:26.621 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:26.621 "strip_size_kb": 64, 00:11:26.621 "state": "online", 00:11:26.621 "raid_level": "concat", 00:11:26.621 "superblock": true, 00:11:26.621 "num_base_bdevs": 3, 00:11:26.621 "num_base_bdevs_discovered": 3, 00:11:26.621 "num_base_bdevs_operational": 3, 00:11:26.621 "base_bdevs_list": [ 00:11:26.621 { 00:11:26.621 "name": "pt1", 00:11:26.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.621 "is_configured": true, 00:11:26.621 "data_offset": 2048, 00:11:26.621 "data_size": 63488 00:11:26.621 }, 00:11:26.621 { 00:11:26.621 "name": "pt2", 00:11:26.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.621 "is_configured": true, 00:11:26.621 "data_offset": 2048, 00:11:26.621 "data_size": 63488 00:11:26.621 }, 00:11:26.621 { 00:11:26.621 "name": "pt3", 00:11:26.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.621 "is_configured": true, 00:11:26.621 "data_offset": 2048, 
00:11:26.621 "data_size": 63488 00:11:26.621 } 00:11:26.621 ] 00:11:26.621 }' 00:11:26.621 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:26.621 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:27.188 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:27.447 [2024-07-15 18:25:19.622994] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.447 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:27.447 "name": "raid_bdev1", 00:11:27.447 "aliases": [ 00:11:27.447 "92d8568e-42d7-11ef-9ade-d5fc5159efa5" 00:11:27.447 ], 00:11:27.447 "product_name": "Raid Volume", 00:11:27.447 "block_size": 512, 00:11:27.447 "num_blocks": 190464, 00:11:27.447 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:27.447 "assigned_rate_limits": { 00:11:27.447 "rw_ios_per_sec": 0, 00:11:27.447 "rw_mbytes_per_sec": 0, 00:11:27.447 "r_mbytes_per_sec": 0, 00:11:27.447 "w_mbytes_per_sec": 0 00:11:27.447 }, 00:11:27.447 "claimed": false, 00:11:27.447 "zoned": false, 00:11:27.447 "supported_io_types": { 00:11:27.447 "read": true, 00:11:27.447 "write": true, 00:11:27.447 "unmap": true, 00:11:27.447 "flush": true, 00:11:27.447 "reset": true, 00:11:27.447 "nvme_admin": false, 00:11:27.447 "nvme_io": false, 00:11:27.447 "nvme_io_md": false, 00:11:27.447 "write_zeroes": true, 00:11:27.447 "zcopy": false, 00:11:27.447 "get_zone_info": false, 00:11:27.447 "zone_management": false, 00:11:27.447 "zone_append": false, 00:11:27.447 "compare": false, 00:11:27.447 "compare_and_write": false, 00:11:27.447 "abort": false, 00:11:27.447 "seek_hole": false, 00:11:27.447 "seek_data": false, 00:11:27.447 "copy": false, 00:11:27.447 "nvme_iov_md": false 00:11:27.447 }, 00:11:27.447 "memory_domains": [ 00:11:27.447 { 00:11:27.447 "dma_device_id": "system", 00:11:27.447 "dma_device_type": 1 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.447 "dma_device_type": 2 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "dma_device_id": "system", 00:11:27.447 "dma_device_type": 1 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.447 "dma_device_type": 2 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "dma_device_id": "system", 00:11:27.447 "dma_device_type": 1 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.447 "dma_device_type": 2 00:11:27.447 } 00:11:27.447 ], 00:11:27.447 "driver_specific": { 00:11:27.447 "raid": { 00:11:27.447 "uuid": "92d8568e-42d7-11ef-9ade-d5fc5159efa5", 00:11:27.447 "strip_size_kb": 64, 00:11:27.447 
"state": "online", 00:11:27.447 "raid_level": "concat", 00:11:27.447 "superblock": true, 00:11:27.447 "num_base_bdevs": 3, 00:11:27.447 "num_base_bdevs_discovered": 3, 00:11:27.447 "num_base_bdevs_operational": 3, 00:11:27.447 "base_bdevs_list": [ 00:11:27.447 { 00:11:27.447 "name": "pt1", 00:11:27.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.448 "is_configured": true, 00:11:27.448 "data_offset": 2048, 00:11:27.448 "data_size": 63488 00:11:27.448 }, 00:11:27.448 { 00:11:27.448 "name": "pt2", 00:11:27.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.448 "is_configured": true, 00:11:27.448 "data_offset": 2048, 00:11:27.448 "data_size": 63488 00:11:27.448 }, 00:11:27.448 { 00:11:27.448 "name": "pt3", 00:11:27.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.448 "is_configured": true, 00:11:27.448 "data_offset": 2048, 00:11:27.448 "data_size": 63488 00:11:27.448 } 00:11:27.448 ] 00:11:27.448 } 00:11:27.448 } 00:11:27.448 }' 00:11:27.448 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.448 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:27.448 pt2 00:11:27.448 pt3' 00:11:27.448 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:27.448 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:27.448 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:27.706 "name": "pt1", 00:11:27.706 "aliases": [ 00:11:27.706 "00000000-0000-0000-0000-000000000001" 00:11:27.706 ], 00:11:27.706 "product_name": "passthru", 00:11:27.706 "block_size": 512, 00:11:27.706 "num_blocks": 65536, 00:11:27.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.706 "assigned_rate_limits": { 00:11:27.706 "rw_ios_per_sec": 0, 00:11:27.706 "rw_mbytes_per_sec": 0, 00:11:27.706 "r_mbytes_per_sec": 0, 00:11:27.706 "w_mbytes_per_sec": 0 00:11:27.706 }, 00:11:27.706 "claimed": true, 00:11:27.706 "claim_type": "exclusive_write", 00:11:27.706 "zoned": false, 00:11:27.706 "supported_io_types": { 00:11:27.706 "read": true, 00:11:27.706 "write": true, 00:11:27.706 "unmap": true, 00:11:27.706 "flush": true, 00:11:27.706 "reset": true, 00:11:27.706 "nvme_admin": false, 00:11:27.706 "nvme_io": false, 00:11:27.706 "nvme_io_md": false, 00:11:27.706 "write_zeroes": true, 00:11:27.706 "zcopy": true, 00:11:27.706 "get_zone_info": false, 00:11:27.706 "zone_management": false, 00:11:27.706 "zone_append": false, 00:11:27.706 "compare": false, 00:11:27.706 "compare_and_write": false, 00:11:27.706 "abort": true, 00:11:27.706 "seek_hole": false, 00:11:27.706 "seek_data": false, 00:11:27.706 "copy": true, 00:11:27.706 "nvme_iov_md": false 00:11:27.706 }, 00:11:27.706 "memory_domains": [ 00:11:27.706 { 00:11:27.706 "dma_device_id": "system", 00:11:27.706 "dma_device_type": 1 00:11:27.706 }, 00:11:27.706 { 00:11:27.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.706 "dma_device_type": 2 00:11:27.706 } 00:11:27.706 ], 00:11:27.706 "driver_specific": { 00:11:27.706 "passthru": { 00:11:27.706 "name": "pt1", 00:11:27.706 "base_bdev_name": "malloc1" 00:11:27.706 } 00:11:27.706 } 00:11:27.706 }' 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.706 18:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:27.706 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:27.964 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:27.964 "name": "pt2", 00:11:27.964 "aliases": [ 00:11:27.964 "00000000-0000-0000-0000-000000000002" 00:11:27.964 ], 00:11:27.964 "product_name": "passthru", 00:11:27.964 "block_size": 512, 00:11:27.964 "num_blocks": 65536, 00:11:27.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.964 "assigned_rate_limits": { 00:11:27.964 "rw_ios_per_sec": 0, 00:11:27.964 "rw_mbytes_per_sec": 0, 00:11:27.964 "r_mbytes_per_sec": 0, 00:11:27.964 "w_mbytes_per_sec": 0 00:11:27.964 }, 00:11:27.964 "claimed": true, 00:11:27.964 "claim_type": "exclusive_write", 00:11:27.964 "zoned": false, 00:11:27.964 "supported_io_types": { 00:11:27.964 "read": true, 00:11:27.964 "write": true, 00:11:27.964 "unmap": true, 00:11:27.964 "flush": true, 00:11:27.964 "reset": true, 00:11:27.964 "nvme_admin": false, 00:11:27.964 "nvme_io": false, 00:11:27.964 "nvme_io_md": false, 00:11:27.964 "write_zeroes": true, 00:11:27.964 "zcopy": true, 00:11:27.964 "get_zone_info": false, 00:11:27.964 "zone_management": false, 00:11:27.964 "zone_append": false, 00:11:27.964 "compare": false, 00:11:27.964 "compare_and_write": false, 00:11:27.964 "abort": true, 00:11:27.964 "seek_hole": false, 00:11:27.964 "seek_data": false, 00:11:27.964 "copy": true, 00:11:27.964 "nvme_iov_md": false 00:11:27.964 }, 00:11:27.964 "memory_domains": [ 00:11:27.964 { 00:11:27.964 "dma_device_id": "system", 00:11:27.964 "dma_device_type": 1 00:11:27.964 }, 00:11:27.964 { 00:11:27.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.964 "dma_device_type": 2 00:11:27.964 } 00:11:27.964 ], 00:11:27.964 "driver_specific": { 00:11:27.964 "passthru": { 00:11:27.964 "name": "pt2", 00:11:27.964 "base_bdev_name": "malloc2" 00:11:27.964 } 00:11:27.964 } 00:11:27.964 }' 00:11:27.964 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.964 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.964 
18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:27.964 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.964 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:28.222 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.481 "name": "pt3", 00:11:28.481 "aliases": [ 00:11:28.481 "00000000-0000-0000-0000-000000000003" 00:11:28.481 ], 00:11:28.481 "product_name": "passthru", 00:11:28.481 "block_size": 512, 00:11:28.481 "num_blocks": 65536, 00:11:28.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.481 "assigned_rate_limits": { 00:11:28.481 "rw_ios_per_sec": 0, 00:11:28.481 "rw_mbytes_per_sec": 0, 00:11:28.481 "r_mbytes_per_sec": 0, 00:11:28.481 "w_mbytes_per_sec": 0 00:11:28.481 }, 00:11:28.481 "claimed": true, 00:11:28.481 "claim_type": "exclusive_write", 00:11:28.481 "zoned": false, 00:11:28.481 "supported_io_types": { 00:11:28.481 "read": true, 00:11:28.481 "write": true, 00:11:28.481 "unmap": true, 00:11:28.481 "flush": true, 00:11:28.481 "reset": true, 00:11:28.481 "nvme_admin": false, 00:11:28.481 "nvme_io": false, 00:11:28.481 "nvme_io_md": false, 00:11:28.481 "write_zeroes": true, 00:11:28.481 "zcopy": true, 00:11:28.481 "get_zone_info": false, 00:11:28.481 "zone_management": false, 00:11:28.481 "zone_append": false, 00:11:28.481 "compare": false, 00:11:28.481 "compare_and_write": false, 00:11:28.481 "abort": true, 00:11:28.481 "seek_hole": false, 00:11:28.481 "seek_data": false, 00:11:28.481 "copy": true, 00:11:28.481 "nvme_iov_md": false 00:11:28.481 }, 00:11:28.481 "memory_domains": [ 00:11:28.481 { 00:11:28.481 "dma_device_id": "system", 00:11:28.481 "dma_device_type": 1 00:11:28.481 }, 00:11:28.481 { 00:11:28.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.481 "dma_device_type": 2 00:11:28.481 } 00:11:28.481 ], 00:11:28.481 "driver_specific": { 00:11:28.481 "passthru": { 00:11:28.481 "name": "pt3", 00:11:28.481 "base_bdev_name": "malloc3" 00:11:28.481 } 00:11:28.481 } 00:11:28.481 }' 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:28.481 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:28.740 [2024-07-15 18:25:20.947012] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 92d8568e-42d7-11ef-9ade-d5fc5159efa5 '!=' 92d8568e-42d7-11ef-9ade-d5fc5159efa5 ']' 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55505 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55505 ']' 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55505 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55505 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:28.740 killing process with pid 55505 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55505' 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55505 00:11:28.740 [2024-07-15 18:25:20.977871] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.740 [2024-07-15 18:25:20.977898] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.740 [2024-07-15 18:25:20.977920] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.740 [2024-07-15 18:25:20.977924] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x168709034780 name raid_bdev1, state offline 00:11:28.740 18:25:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55505 00:11:28.740 [2024-07-15 18:25:21.001093] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.999 18:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:28.999 00:11:28.999 real 0m12.044s 00:11:28.999 user 0m21.220s 00:11:28.999 sys 0m2.012s 00:11:28.999 ************************************ 00:11:28.999 END TEST raid_superblock_test 00:11:28.999 ************************************ 00:11:28.999 18:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.999 18:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.999 18:25:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:28.999 18:25:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:28.999 18:25:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:28.999 18:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.999 18:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.999 ************************************ 00:11:28.999 START TEST raid_read_error_test 00:11:28.999 ************************************ 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.daeJiZzpFf 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55860 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55860 /var/tmp/spdk-raid.sock 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55860 ']' 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.999 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.999 [2024-07-15 18:25:21.293058] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:11:28.999 [2024-07-15 18:25:21.293230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:29.566 EAL: TSC is not safe to use in SMP mode 00:11:29.566 EAL: TSC is not invariant 00:11:29.566 [2024-07-15 18:25:21.889410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.824 [2024-07-15 18:25:21.996256] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
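With bdevperf up, the steps that follow stack each base bdev three layers deep: malloc at the bottom, an error bdev for fault injection in the middle (bdev_error_create names it EE_<base>), and a passthru on top, before assembling the concat array with an on-disk superblock (-s). In outline, assuming the bdevperf RPC socket opened above (every call below mirrors one that appears later in this log):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"          # 32 MB backing store, 512-byte blocks
      $RPC bdev_error_create "BaseBdev${i}_malloc"                     # creates EE_BaseBdev${i}_malloc
      $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  # once I/O is running, read failures are injected at the error layer:
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure

Because concat has no redundancy, the injected read failures surface in bdevperf's per-job stats, which is why the test later accepts a nonzero fail rate (0.48 failures/s here) for raid_bdev1.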
00:11:29.824 [2024-07-15 18:25:21.998354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.824 [2024-07-15 18:25:21.999119] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.824 [2024-07-15 18:25:21.999133] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.084 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.084 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:30.084 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:30.084 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.342 BaseBdev1_malloc 00:11:30.343 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:30.600 true 00:11:30.600 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.858 [2024-07-15 18:25:23.103241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.858 [2024-07-15 18:25:23.103297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.858 [2024-07-15 18:25:23.103325] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22b07e434780 00:11:30.858 [2024-07-15 18:25:23.103334] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.858 [2024-07-15 18:25:23.104086] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.858 [2024-07-15 18:25:23.104111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.858 BaseBdev1 00:11:30.858 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:30.858 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.115 BaseBdev2_malloc 00:11:31.115 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:31.388 true 00:11:31.388 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.657 [2024-07-15 18:25:23.819259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.657 [2024-07-15 18:25:23.819313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.657 [2024-07-15 18:25:23.819340] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22b07e434c80 00:11:31.657 [2024-07-15 18:25:23.819349] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.657 [2024-07-15 18:25:23.820068] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.657 [2024-07-15 18:25:23.820088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:11:31.657 BaseBdev2 00:11:31.657 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:31.657 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:31.915 BaseBdev3_malloc 00:11:31.915 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:32.173 true 00:11:32.173 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:32.431 [2024-07-15 18:25:24.639287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:32.431 [2024-07-15 18:25:24.639340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.431 [2024-07-15 18:25:24.639365] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22b07e435180 00:11:32.431 [2024-07-15 18:25:24.639374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.431 [2024-07-15 18:25:24.640107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.431 [2024-07-15 18:25:24.640129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:32.431 BaseBdev3 00:11:32.431 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:32.694 [2024-07-15 18:25:24.923313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.694 [2024-07-15 18:25:24.923954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.694 [2024-07-15 18:25:24.923979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.694 [2024-07-15 18:25:24.924036] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22b07e435400 00:11:32.694 [2024-07-15 18:25:24.924042] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:32.694 [2024-07-15 18:25:24.924079] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22b07e4a0e20 00:11:32.694 [2024-07-15 18:25:24.924158] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22b07e435400 00:11:32.694 [2024-07-15 18:25:24.924163] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22b07e435400 00:11:32.694 [2024-07-15 18:25:24.924189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.694 
18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.694 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.955 18:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.955 "name": "raid_bdev1", 00:11:32.955 "uuid": "9a7fa498-42d7-11ef-9ade-d5fc5159efa5", 00:11:32.955 "strip_size_kb": 64, 00:11:32.955 "state": "online", 00:11:32.955 "raid_level": "concat", 00:11:32.955 "superblock": true, 00:11:32.955 "num_base_bdevs": 3, 00:11:32.955 "num_base_bdevs_discovered": 3, 00:11:32.955 "num_base_bdevs_operational": 3, 00:11:32.955 "base_bdevs_list": [ 00:11:32.955 { 00:11:32.955 "name": "BaseBdev1", 00:11:32.955 "uuid": "b5db233b-7786-165c-9468-af575a5972be", 00:11:32.955 "is_configured": true, 00:11:32.955 "data_offset": 2048, 00:11:32.955 "data_size": 63488 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "name": "BaseBdev2", 00:11:32.955 "uuid": "0afb88f0-8fef-d158-885c-eef202e7d8c6", 00:11:32.955 "is_configured": true, 00:11:32.955 "data_offset": 2048, 00:11:32.955 "data_size": 63488 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "name": "BaseBdev3", 00:11:32.955 "uuid": "2782e2a1-6c9a-a758-bf1e-41ae9beba752", 00:11:32.955 "is_configured": true, 00:11:32.955 "data_offset": 2048, 00:11:32.955 "data_size": 63488 00:11:32.955 } 00:11:32.955 ] 00:11:32.955 }' 00:11:32.955 18:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.955 18:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.213 18:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:33.213 18:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:33.471 [2024-07-15 18:25:25.599543] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22b07e4a0ec0 00:11:34.405 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.664 
18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.664 18:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.922 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.922 "name": "raid_bdev1", 00:11:34.922 "uuid": "9a7fa498-42d7-11ef-9ade-d5fc5159efa5", 00:11:34.922 "strip_size_kb": 64, 00:11:34.922 "state": "online", 00:11:34.922 "raid_level": "concat", 00:11:34.922 "superblock": true, 00:11:34.922 "num_base_bdevs": 3, 00:11:34.922 "num_base_bdevs_discovered": 3, 00:11:34.922 "num_base_bdevs_operational": 3, 00:11:34.922 "base_bdevs_list": [ 00:11:34.922 { 00:11:34.922 "name": "BaseBdev1", 00:11:34.922 "uuid": "b5db233b-7786-165c-9468-af575a5972be", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "name": "BaseBdev2", 00:11:34.922 "uuid": "0afb88f0-8fef-d158-885c-eef202e7d8c6", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "name": "BaseBdev3", 00:11:34.922 "uuid": "2782e2a1-6c9a-a758-bf1e-41ae9beba752", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.923 } 00:11:34.923 ] 00:11:34.923 }' 00:11:34.923 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.923 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:35.440 [2024-07-15 18:25:27.699164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.440 [2024-07-15 18:25:27.699200] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.440 [2024-07-15 18:25:27.699539] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.440 [2024-07-15 18:25:27.699550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.440 [2024-07-15 18:25:27.699557] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.440 [2024-07-15 18:25:27.699561] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22b07e435400 name raid_bdev1, state offline 00:11:35.440 0 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55860 00:11:35.440 18:25:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55860 ']' 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55860 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55860 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:35.440 killing process with pid 55860 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55860' 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55860 00:11:35.440 [2024-07-15 18:25:27.733958] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.440 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55860 00:11:35.440 [2024-07-15 18:25:27.756074] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.daeJiZzpFf 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:11:35.698 00:11:35.698 real 0m6.710s 00:11:35.698 user 0m10.390s 00:11:35.698 sys 0m1.216s 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.698 ************************************ 00:11:35.698 END TEST raid_read_error_test 00:11:35.698 ************************************ 00:11:35.698 18:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.698 18:25:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:35.698 18:25:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:35.698 18:25:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:35.698 18:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.698 18:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.699 ************************************ 00:11:35.699 START TEST raid_write_error_test 00:11:35.699 ************************************ 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:35.699 18:25:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vO5junIWm1 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55991 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55991 /var/tmp/spdk-raid.sock 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55991 ']' 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
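[annotation] The write-error run being set up here exercises the same fixture as the read-error variant above: each base bdev is a malloc disk wrapped in an error bdev (registered as EE_<name>) and exposed through a passthru bdev, and the three passthrus are assembled into a concat raid before an error is injected into one leg. A minimal consolidated sketch of that sequence, using only commands visible in this log (the loop condenses the per-bdev trace; bdevperf log parsing and cleanup checks are omitted):

    # Build malloc -> error -> passthru stacks, then a concat raid on top.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "${bdev}_malloc"         # 32 MB, 512-byte blocks
        $RPC bdev_error_create "${bdev}_malloc"                    # registers EE_${bdev}_malloc
        $RPC bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    done
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure  # fail writes on one leg
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    $RPC bdev_raid_delete raid_bdev1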
00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.699 18:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.699 [2024-07-15 18:25:28.050985] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:11:35.699 [2024-07-15 18:25:28.051281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:36.634 EAL: TSC is not safe to use in SMP mode 00:11:36.634 EAL: TSC is not invariant 00:11:36.634 [2024-07-15 18:25:28.662050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.634 [2024-07-15 18:25:28.773087] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:36.634 [2024-07-15 18:25:28.775249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.634 [2024-07-15 18:25:28.776081] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.634 [2024-07-15 18:25:28.776096] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.893 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.893 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:36.893 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:36.893 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:37.151 BaseBdev1_malloc 00:11:37.151 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:37.409 true 00:11:37.409 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:37.668 [2024-07-15 18:25:29.928029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:37.668 [2024-07-15 18:25:29.928128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.668 [2024-07-15 18:25:29.928156] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29038434780 00:11:37.668 [2024-07-15 18:25:29.928165] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.668 [2024-07-15 18:25:29.928878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.668 [2024-07-15 18:25:29.928906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:37.668 BaseBdev1 00:11:37.668 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:37.668 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:37.926 BaseBdev2_malloc 00:11:37.926 18:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:38.185 true 00:11:38.185 18:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.444 [2024-07-15 18:25:30.764062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.444 [2024-07-15 18:25:30.764169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.444 [2024-07-15 18:25:30.764195] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29038434c80 00:11:38.444 [2024-07-15 18:25:30.764203] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.444 [2024-07-15 18:25:30.764907] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.444 [2024-07-15 18:25:30.764934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.444 BaseBdev2 00:11:38.444 18:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:38.444 18:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.703 BaseBdev3_malloc 00:11:38.703 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:38.962 true 00:11:38.962 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.221 [2024-07-15 18:25:31.516119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.221 [2024-07-15 18:25:31.516192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.221 [2024-07-15 18:25:31.516221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29038435180 00:11:39.221 [2024-07-15 18:25:31.516230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.221 [2024-07-15 18:25:31.516924] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.221 [2024-07-15 18:25:31.516952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:39.221 BaseBdev3 00:11:39.221 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:39.477 [2024-07-15 18:25:31.836133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.477 [2024-07-15 18:25:31.836736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.477 [2024-07-15 18:25:31.836763] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.477 [2024-07-15 18:25:31.836826] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x29038435400 00:11:39.477 [2024-07-15 18:25:31.836832] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:39.477 [2024-07-15 18:25:31.836877] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x290384a0e20 00:11:39.477 [2024-07-15 18:25:31.836951] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x29038435400 00:11:39.477 [2024-07-15 18:25:31.836956] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x29038435400 00:11:39.477 [2024-07-15 18:25:31.836984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.477 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.733 18:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.733 18:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.733 "name": "raid_bdev1", 00:11:39.733 "uuid": "9e9e7483-42d7-11ef-9ade-d5fc5159efa5", 00:11:39.733 "strip_size_kb": 64, 00:11:39.733 "state": "online", 00:11:39.733 "raid_level": "concat", 00:11:39.733 "superblock": true, 00:11:39.733 "num_base_bdevs": 3, 00:11:39.733 "num_base_bdevs_discovered": 3, 00:11:39.733 "num_base_bdevs_operational": 3, 00:11:39.733 "base_bdevs_list": [ 00:11:39.733 { 00:11:39.733 "name": "BaseBdev1", 00:11:39.733 "uuid": "beb19e93-c6b3-575c-a920-1d3bc8cfdf6f", 00:11:39.733 "is_configured": true, 00:11:39.733 "data_offset": 2048, 00:11:39.733 "data_size": 63488 00:11:39.733 }, 00:11:39.733 { 00:11:39.733 "name": "BaseBdev2", 00:11:39.733 "uuid": "94cba2f0-88a3-e151-a3b5-ed3e0548e848", 00:11:39.733 "is_configured": true, 00:11:39.733 "data_offset": 2048, 00:11:39.733 "data_size": 63488 00:11:39.733 }, 00:11:39.733 { 00:11:39.733 "name": "BaseBdev3", 00:11:39.733 "uuid": "69a4f5d5-a711-555a-890a-cb1926c475a2", 00:11:39.733 "is_configured": true, 00:11:39.733 "data_offset": 2048, 00:11:39.733 "data_size": 63488 00:11:39.733 } 00:11:39.733 ] 00:11:39.733 }' 00:11:39.733 18:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.733 18:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.299 18:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:40.299 18:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:11:40.299 [2024-07-15 18:25:32.516355] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x290384a0ec0 00:11:41.233 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.493 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.751 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.751 "name": "raid_bdev1", 00:11:41.751 "uuid": "9e9e7483-42d7-11ef-9ade-d5fc5159efa5", 00:11:41.751 "strip_size_kb": 64, 00:11:41.751 "state": "online", 00:11:41.751 "raid_level": "concat", 00:11:41.751 "superblock": true, 00:11:41.751 "num_base_bdevs": 3, 00:11:41.751 "num_base_bdevs_discovered": 3, 00:11:41.751 "num_base_bdevs_operational": 3, 00:11:41.751 "base_bdevs_list": [ 00:11:41.751 { 00:11:41.751 "name": "BaseBdev1", 00:11:41.751 "uuid": "beb19e93-c6b3-575c-a920-1d3bc8cfdf6f", 00:11:41.751 "is_configured": true, 00:11:41.751 "data_offset": 2048, 00:11:41.751 "data_size": 63488 00:11:41.751 }, 00:11:41.751 { 00:11:41.751 "name": "BaseBdev2", 00:11:41.751 "uuid": "94cba2f0-88a3-e151-a3b5-ed3e0548e848", 00:11:41.751 "is_configured": true, 00:11:41.751 "data_offset": 2048, 00:11:41.751 "data_size": 63488 00:11:41.751 }, 00:11:41.751 { 00:11:41.751 "name": "BaseBdev3", 00:11:41.751 "uuid": "69a4f5d5-a711-555a-890a-cb1926c475a2", 00:11:41.752 "is_configured": true, 00:11:41.752 "data_offset": 2048, 00:11:41.752 "data_size": 63488 00:11:41.752 } 00:11:41.752 ] 00:11:41.752 }' 00:11:41.752 18:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.752 18:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.010 
18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:42.267 [2024-07-15 18:25:34.575449] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.267 [2024-07-15 18:25:34.575475] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.267 [2024-07-15 18:25:34.575813] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.267 [2024-07-15 18:25:34.575823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.267 [2024-07-15 18:25:34.575830] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.267 [2024-07-15 18:25:34.575834] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29038435400 name raid_bdev1, state offline 00:11:42.267 0 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55991 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55991 ']' 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55991 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55991 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:42.267 killing process with pid 55991 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55991' 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55991 00:11:42.267 [2024-07-15 18:25:34.601639] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.267 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55991 00:11:42.267 [2024-07-15 18:25:34.623438] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.vO5junIWm1 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:11:42.525 00:11:42.525 real 0m6.808s 00:11:42.525 user 0m10.706s 00:11:42.525 sys 0m1.146s 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:11:42.525 18:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.525 ************************************ 00:11:42.525 END TEST raid_write_error_test 00:11:42.525 ************************************ 00:11:42.525 18:25:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:42.525 18:25:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:42.525 18:25:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:42.525 18:25:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:42.525 18:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.525 18:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.525 ************************************ 00:11:42.525 START TEST raid_state_function_test 00:11:42.525 ************************************ 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56124 00:11:42.525 Process raid pid: 56124 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56124' 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56124 /var/tmp/spdk-raid.sock 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 56124 ']' 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.525 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.525 [2024-07-15 18:25:34.897735] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:11:42.525 [2024-07-15 18:25:34.897945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:43.460 EAL: TSC is not safe to use in SMP mode 00:11:43.460 EAL: TSC is not invariant 00:11:43.460 [2024-07-15 18:25:35.491086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.460 [2024-07-15 18:25:35.599589] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
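[annotation] The verify_raid_bdev_state calls traced throughout these tests reduce to fetching the raid bdev's JSON and checking its advertised fields. In the sketch below the RPC and jq invocations are taken verbatim from the trace; the per-field comparisons are an assumption, since the trace only shows the function's locals being set:

    # Sketch of verify_raid_bdev_state Existed_Raid configuring raid1 0 3.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$tmp") == configuring ]]
    [[ $(jq -r '.raid_level' <<< "$tmp") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") -eq 3 ]]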
00:11:43.460 [2024-07-15 18:25:35.601804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.460 [2024-07-15 18:25:35.602552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.460 [2024-07-15 18:25:35.602567] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.719 18:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.719 18:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:43.719 18:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:43.978 [2024-07-15 18:25:36.162536] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.978 [2024-07-15 18:25:36.162587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.978 [2024-07-15 18:25:36.162592] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.978 [2024-07-15 18:25:36.162601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.978 [2024-07-15 18:25:36.162604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.978 [2024-07-15 18:25:36.162612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.978 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.236 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.236 "name": "Existed_Raid", 00:11:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.236 "strip_size_kb": 0, 00:11:44.236 "state": "configuring", 00:11:44.236 "raid_level": "raid1", 00:11:44.236 "superblock": false, 00:11:44.236 "num_base_bdevs": 3, 00:11:44.236 "num_base_bdevs_discovered": 0, 00:11:44.236 "num_base_bdevs_operational": 3, 00:11:44.236 "base_bdevs_list": [ 00:11:44.236 
{ 00:11:44.236 "name": "BaseBdev1", 00:11:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.236 "is_configured": false, 00:11:44.236 "data_offset": 0, 00:11:44.236 "data_size": 0 00:11:44.236 }, 00:11:44.236 { 00:11:44.236 "name": "BaseBdev2", 00:11:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.236 "is_configured": false, 00:11:44.236 "data_offset": 0, 00:11:44.236 "data_size": 0 00:11:44.236 }, 00:11:44.236 { 00:11:44.236 "name": "BaseBdev3", 00:11:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.236 "is_configured": false, 00:11:44.236 "data_offset": 0, 00:11:44.236 "data_size": 0 00:11:44.236 } 00:11:44.236 ] 00:11:44.236 }' 00:11:44.236 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.236 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.495 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:44.755 [2024-07-15 18:25:37.086569] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.755 [2024-07-15 18:25:37.086596] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378cbd034500 name Existed_Raid, state configuring 00:11:44.755 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:45.012 [2024-07-15 18:25:37.326579] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.012 [2024-07-15 18:25:37.326630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.012 [2024-07-15 18:25:37.326635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.012 [2024-07-15 18:25:37.326644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.012 [2024-07-15 18:25:37.326647] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.012 [2024-07-15 18:25:37.326655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.012 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.270 [2024-07-15 18:25:37.563639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.270 BaseBdev1 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:45.270 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:45.528 18:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.790 [ 00:11:45.790 { 00:11:45.790 "name": "BaseBdev1", 00:11:45.790 "aliases": [ 00:11:45.790 "a2083f4d-42d7-11ef-9ade-d5fc5159efa5" 00:11:45.790 ], 00:11:45.790 "product_name": "Malloc disk", 00:11:45.790 "block_size": 512, 00:11:45.790 "num_blocks": 65536, 00:11:45.790 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:45.790 "assigned_rate_limits": { 00:11:45.790 "rw_ios_per_sec": 0, 00:11:45.790 "rw_mbytes_per_sec": 0, 00:11:45.790 "r_mbytes_per_sec": 0, 00:11:45.790 "w_mbytes_per_sec": 0 00:11:45.790 }, 00:11:45.790 "claimed": true, 00:11:45.790 "claim_type": "exclusive_write", 00:11:45.790 "zoned": false, 00:11:45.790 "supported_io_types": { 00:11:45.790 "read": true, 00:11:45.790 "write": true, 00:11:45.790 "unmap": true, 00:11:45.790 "flush": true, 00:11:45.790 "reset": true, 00:11:45.790 "nvme_admin": false, 00:11:45.790 "nvme_io": false, 00:11:45.790 "nvme_io_md": false, 00:11:45.790 "write_zeroes": true, 00:11:45.790 "zcopy": true, 00:11:45.791 "get_zone_info": false, 00:11:45.791 "zone_management": false, 00:11:45.791 "zone_append": false, 00:11:45.791 "compare": false, 00:11:45.791 "compare_and_write": false, 00:11:45.791 "abort": true, 00:11:45.791 "seek_hole": false, 00:11:45.791 "seek_data": false, 00:11:45.791 "copy": true, 00:11:45.791 "nvme_iov_md": false 00:11:45.791 }, 00:11:45.791 "memory_domains": [ 00:11:45.791 { 00:11:45.791 "dma_device_id": "system", 00:11:45.791 "dma_device_type": 1 00:11:45.791 }, 00:11:45.791 { 00:11:45.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.791 "dma_device_type": 2 00:11:45.791 } 00:11:45.791 ], 00:11:45.791 "driver_specific": {} 00:11:45.791 } 00:11:45.791 ] 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.791 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.071 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:46.071 "name": "Existed_Raid", 00:11:46.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.071 "strip_size_kb": 0, 00:11:46.071 "state": "configuring", 00:11:46.071 "raid_level": "raid1", 00:11:46.071 "superblock": false, 00:11:46.071 "num_base_bdevs": 3, 00:11:46.071 "num_base_bdevs_discovered": 1, 00:11:46.071 "num_base_bdevs_operational": 3, 00:11:46.071 "base_bdevs_list": [ 00:11:46.071 { 00:11:46.071 "name": "BaseBdev1", 00:11:46.071 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:46.071 "is_configured": true, 00:11:46.071 "data_offset": 0, 00:11:46.071 "data_size": 65536 00:11:46.071 }, 00:11:46.071 { 00:11:46.071 "name": "BaseBdev2", 00:11:46.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.071 "is_configured": false, 00:11:46.071 "data_offset": 0, 00:11:46.071 "data_size": 0 00:11:46.071 }, 00:11:46.071 { 00:11:46.071 "name": "BaseBdev3", 00:11:46.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.071 "is_configured": false, 00:11:46.071 "data_offset": 0, 00:11:46.071 "data_size": 0 00:11:46.071 } 00:11:46.071 ] 00:11:46.071 }' 00:11:46.071 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.071 18:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.328 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:46.589 [2024-07-15 18:25:38.890655] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.589 [2024-07-15 18:25:38.890691] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378cbd034500 name Existed_Raid, state configuring 00:11:46.589 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:46.848 [2024-07-15 18:25:39.166680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.848 [2024-07-15 18:25:39.167578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.848 [2024-07-15 18:25:39.167621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.848 [2024-07-15 18:25:39.167626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.848 [2024-07-15 18:25:39.167635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.848 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.107 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.107 "name": "Existed_Raid", 00:11:47.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.107 "strip_size_kb": 0, 00:11:47.107 "state": "configuring", 00:11:47.107 "raid_level": "raid1", 00:11:47.107 "superblock": false, 00:11:47.107 "num_base_bdevs": 3, 00:11:47.107 "num_base_bdevs_discovered": 1, 00:11:47.107 "num_base_bdevs_operational": 3, 00:11:47.107 "base_bdevs_list": [ 00:11:47.107 { 00:11:47.107 "name": "BaseBdev1", 00:11:47.107 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:47.107 "is_configured": true, 00:11:47.107 "data_offset": 0, 00:11:47.107 "data_size": 65536 00:11:47.107 }, 00:11:47.107 { 00:11:47.107 "name": "BaseBdev2", 00:11:47.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.107 "is_configured": false, 00:11:47.107 "data_offset": 0, 00:11:47.107 "data_size": 0 00:11:47.107 }, 00:11:47.107 { 00:11:47.107 "name": "BaseBdev3", 00:11:47.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.107 "is_configured": false, 00:11:47.107 "data_offset": 0, 00:11:47.107 "data_size": 0 00:11:47.107 } 00:11:47.107 ] 00:11:47.107 }' 00:11:47.107 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.107 18:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.364 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.622 [2024-07-15 18:25:40.002874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.880 BaseBdev2 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:47.880 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:48.137 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.395 [ 00:11:48.395 { 00:11:48.395 "name": "BaseBdev2", 00:11:48.395 "aliases": [ 00:11:48.395 "a37c950f-42d7-11ef-9ade-d5fc5159efa5" 00:11:48.395 ], 00:11:48.395 "product_name": "Malloc disk", 00:11:48.395 "block_size": 512, 00:11:48.395 "num_blocks": 65536, 00:11:48.395 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5", 00:11:48.395 "assigned_rate_limits": { 00:11:48.395 "rw_ios_per_sec": 0, 00:11:48.395 "rw_mbytes_per_sec": 0, 00:11:48.395 "r_mbytes_per_sec": 0, 00:11:48.395 "w_mbytes_per_sec": 0 00:11:48.395 }, 00:11:48.395 "claimed": true, 00:11:48.395 "claim_type": "exclusive_write", 00:11:48.395 "zoned": false, 00:11:48.395 "supported_io_types": { 00:11:48.395 "read": true, 00:11:48.395 "write": true, 00:11:48.395 "unmap": true, 00:11:48.395 "flush": true, 00:11:48.395 "reset": true, 00:11:48.395 "nvme_admin": false, 00:11:48.395 "nvme_io": false, 00:11:48.395 "nvme_io_md": false, 00:11:48.395 "write_zeroes": true, 00:11:48.395 "zcopy": true, 00:11:48.395 "get_zone_info": false, 00:11:48.395 "zone_management": false, 00:11:48.395 "zone_append": false, 00:11:48.395 "compare": false, 00:11:48.395 "compare_and_write": false, 00:11:48.395 "abort": true, 00:11:48.395 "seek_hole": false, 00:11:48.395 "seek_data": false, 00:11:48.395 "copy": true, 00:11:48.395 "nvme_iov_md": false 00:11:48.395 }, 00:11:48.395 "memory_domains": [ 00:11:48.395 { 00:11:48.395 "dma_device_id": "system", 00:11:48.395 "dma_device_type": 1 00:11:48.395 }, 00:11:48.395 { 00:11:48.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.395 "dma_device_type": 2 00:11:48.395 } 00:11:48.395 ], 00:11:48.395 "driver_specific": {} 00:11:48.395 } 00:11:48.395 ] 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.395 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.652 18:25:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.653 "name": "Existed_Raid", 00:11:48.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.653 "strip_size_kb": 0, 00:11:48.653 "state": "configuring", 00:11:48.653 "raid_level": "raid1", 00:11:48.653 "superblock": false, 00:11:48.653 "num_base_bdevs": 3, 00:11:48.653 "num_base_bdevs_discovered": 2, 00:11:48.653 "num_base_bdevs_operational": 3, 00:11:48.653 "base_bdevs_list": [ 00:11:48.653 { 00:11:48.653 "name": "BaseBdev1", 00:11:48.653 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:48.653 "is_configured": true, 00:11:48.653 "data_offset": 0, 00:11:48.653 "data_size": 65536 00:11:48.653 }, 00:11:48.653 { 00:11:48.653 "name": "BaseBdev2", 00:11:48.653 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5", 00:11:48.653 "is_configured": true, 00:11:48.653 "data_offset": 0, 00:11:48.653 "data_size": 65536 00:11:48.653 }, 00:11:48.653 { 00:11:48.653 "name": "BaseBdev3", 00:11:48.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.653 "is_configured": false, 00:11:48.653 "data_offset": 0, 00:11:48.653 "data_size": 0 00:11:48.653 } 00:11:48.653 ] 00:11:48.653 }' 00:11:48.653 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.653 18:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.940 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.197 [2024-07-15 18:25:41.366953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.197 [2024-07-15 18:25:41.366985] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378cbd034a00 00:11:49.197 [2024-07-15 18:25:41.366990] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:49.197 [2024-07-15 18:25:41.367012] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378cbd097e20 00:11:49.197 [2024-07-15 18:25:41.367112] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378cbd034a00 00:11:49.197 [2024-07-15 18:25:41.367117] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x378cbd034a00 00:11:49.197 [2024-07-15 18:25:41.367150] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.197 BaseBdev3 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:49.197 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:49.453 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
00:11:49.709 [
00:11:49.709 {
00:11:49.709 "name": "BaseBdev3",
00:11:49.709 "aliases": [
00:11:49.709 "a44cb965-42d7-11ef-9ade-d5fc5159efa5"
00:11:49.709 ],
00:11:49.709 "product_name": "Malloc disk",
00:11:49.709 "block_size": 512,
00:11:49.709 "num_blocks": 65536,
00:11:49.709 "uuid": "a44cb965-42d7-11ef-9ade-d5fc5159efa5",
00:11:49.709 "assigned_rate_limits": {
00:11:49.709 "rw_ios_per_sec": 0,
00:11:49.709 "rw_mbytes_per_sec": 0,
00:11:49.709 "r_mbytes_per_sec": 0,
00:11:49.709 "w_mbytes_per_sec": 0
00:11:49.709 },
00:11:49.709 "claimed": true,
00:11:49.709 "claim_type": "exclusive_write",
00:11:49.709 "zoned": false,
00:11:49.709 "supported_io_types": {
00:11:49.709 "read": true,
00:11:49.709 "write": true,
00:11:49.709 "unmap": true,
00:11:49.709 "flush": true,
00:11:49.709 "reset": true,
00:11:49.709 "nvme_admin": false,
00:11:49.709 "nvme_io": false,
00:11:49.709 "nvme_io_md": false,
00:11:49.709 "write_zeroes": true,
00:11:49.709 "zcopy": true,
00:11:49.709 "get_zone_info": false,
00:11:49.709 "zone_management": false,
00:11:49.709 "zone_append": false,
00:11:49.709 "compare": false,
00:11:49.709 "compare_and_write": false,
00:11:49.709 "abort": true,
00:11:49.709 "seek_hole": false,
00:11:49.709 "seek_data": false,
00:11:49.709 "copy": true,
00:11:49.709 "nvme_iov_md": false
00:11:49.709 },
00:11:49.709 "memory_domains": [
00:11:49.709 {
00:11:49.709 "dma_device_id": "system",
00:11:49.709 "dma_device_type": 1
00:11:49.709 },
00:11:49.709 {
00:11:49.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.710 "dma_device_type": 2
00:11:49.710 }
00:11:49.710 ],
00:11:49.710 "driver_specific": {}
00:11:49.710 }
00:11:49.710 ]
00:11:49.710 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
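The exchange above is the script's create-and-wait idiom: bdev_malloc_create synthesizes a RAM-backed bdev (32 MB in 512-byte blocks, which matches the num_blocks 65536 reported), bdev_wait_for_examine blocks until registered examine callbacks have run, and bdev_get_bdevs -t polls until the bdev appears or the timeout expires. A minimal standalone sketch of the same steps, assuming the rpc.py path and RPC socket used by this run (the rpc helper function is shorthand introduced here, not part of the test script):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_malloc_create 32 512 -b BaseBdev3   # 32 MB of 512-byte blocks -> 65536 blocks
  rpc bdev_wait_for_examine                    # let claim/examine callbacks settle
  rpc bdev_get_bdevs -b BaseBdev3 -t 2000      # wait up to 2000 ms for the bdev to register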
raid_bdev_info='{ 00:11:49.966 "name": "Existed_Raid", 00:11:49.966 "uuid": "a44cc040-42d7-11ef-9ade-d5fc5159efa5", 00:11:49.966 "strip_size_kb": 0, 00:11:49.966 "state": "online", 00:11:49.966 "raid_level": "raid1", 00:11:49.966 "superblock": false, 00:11:49.966 "num_base_bdevs": 3, 00:11:49.966 "num_base_bdevs_discovered": 3, 00:11:49.966 "num_base_bdevs_operational": 3, 00:11:49.966 "base_bdevs_list": [ 00:11:49.966 { 00:11:49.966 "name": "BaseBdev1", 00:11:49.966 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:49.966 "is_configured": true, 00:11:49.966 "data_offset": 0, 00:11:49.966 "data_size": 65536 00:11:49.966 }, 00:11:49.966 { 00:11:49.966 "name": "BaseBdev2", 00:11:49.966 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5", 00:11:49.966 "is_configured": true, 00:11:49.966 "data_offset": 0, 00:11:49.966 "data_size": 65536 00:11:49.966 }, 00:11:49.966 { 00:11:49.966 "name": "BaseBdev3", 00:11:49.966 "uuid": "a44cb965-42d7-11ef-9ade-d5fc5159efa5", 00:11:49.966 "is_configured": true, 00:11:49.966 "data_offset": 0, 00:11:49.966 "data_size": 65536 00:11:49.966 } 00:11:49.966 ] 00:11:49.966 }' 00:11:49.966 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.966 18:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:50.223 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:50.481 [2024-07-15 18:25:42.706914] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:50.481 "name": "Existed_Raid", 00:11:50.481 "aliases": [ 00:11:50.481 "a44cc040-42d7-11ef-9ade-d5fc5159efa5" 00:11:50.481 ], 00:11:50.481 "product_name": "Raid Volume", 00:11:50.481 "block_size": 512, 00:11:50.481 "num_blocks": 65536, 00:11:50.481 "uuid": "a44cc040-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.481 "assigned_rate_limits": { 00:11:50.481 "rw_ios_per_sec": 0, 00:11:50.481 "rw_mbytes_per_sec": 0, 00:11:50.481 "r_mbytes_per_sec": 0, 00:11:50.481 "w_mbytes_per_sec": 0 00:11:50.481 }, 00:11:50.481 "claimed": false, 00:11:50.481 "zoned": false, 00:11:50.481 "supported_io_types": { 00:11:50.481 "read": true, 00:11:50.481 "write": true, 00:11:50.481 "unmap": false, 00:11:50.481 "flush": false, 00:11:50.481 "reset": true, 00:11:50.481 "nvme_admin": false, 00:11:50.481 "nvme_io": false, 00:11:50.481 "nvme_io_md": false, 00:11:50.481 "write_zeroes": true, 00:11:50.481 "zcopy": false, 00:11:50.481 "get_zone_info": false, 00:11:50.481 "zone_management": false, 00:11:50.481 "zone_append": false, 00:11:50.481 "compare": false, 
00:11:50.481 "compare_and_write": false, 00:11:50.481 "abort": false, 00:11:50.481 "seek_hole": false, 00:11:50.481 "seek_data": false, 00:11:50.481 "copy": false, 00:11:50.481 "nvme_iov_md": false 00:11:50.481 }, 00:11:50.481 "memory_domains": [ 00:11:50.481 { 00:11:50.481 "dma_device_id": "system", 00:11:50.481 "dma_device_type": 1 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.481 "dma_device_type": 2 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "dma_device_id": "system", 00:11:50.481 "dma_device_type": 1 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.481 "dma_device_type": 2 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "dma_device_id": "system", 00:11:50.481 "dma_device_type": 1 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.481 "dma_device_type": 2 00:11:50.481 } 00:11:50.481 ], 00:11:50.481 "driver_specific": { 00:11:50.481 "raid": { 00:11:50.481 "uuid": "a44cc040-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.481 "strip_size_kb": 0, 00:11:50.481 "state": "online", 00:11:50.481 "raid_level": "raid1", 00:11:50.481 "superblock": false, 00:11:50.481 "num_base_bdevs": 3, 00:11:50.481 "num_base_bdevs_discovered": 3, 00:11:50.481 "num_base_bdevs_operational": 3, 00:11:50.481 "base_bdevs_list": [ 00:11:50.481 { 00:11:50.481 "name": "BaseBdev1", 00:11:50.481 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.481 "is_configured": true, 00:11:50.481 "data_offset": 0, 00:11:50.481 "data_size": 65536 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "name": "BaseBdev2", 00:11:50.481 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.481 "is_configured": true, 00:11:50.481 "data_offset": 0, 00:11:50.481 "data_size": 65536 00:11:50.481 }, 00:11:50.481 { 00:11:50.481 "name": "BaseBdev3", 00:11:50.481 "uuid": "a44cb965-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.481 "is_configured": true, 00:11:50.481 "data_offset": 0, 00:11:50.481 "data_size": 65536 00:11:50.481 } 00:11:50.481 ] 00:11:50.481 } 00:11:50.481 } 00:11:50.481 }' 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:50.481 BaseBdev2 00:11:50.481 BaseBdev3' 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:50.481 18:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:50.739 "name": "BaseBdev1", 00:11:50.739 "aliases": [ 00:11:50.739 "a2083f4d-42d7-11ef-9ade-d5fc5159efa5" 00:11:50.739 ], 00:11:50.739 "product_name": "Malloc disk", 00:11:50.739 "block_size": 512, 00:11:50.739 "num_blocks": 65536, 00:11:50.739 "uuid": "a2083f4d-42d7-11ef-9ade-d5fc5159efa5", 00:11:50.739 "assigned_rate_limits": { 00:11:50.739 "rw_ios_per_sec": 0, 00:11:50.739 "rw_mbytes_per_sec": 0, 00:11:50.739 "r_mbytes_per_sec": 0, 00:11:50.739 "w_mbytes_per_sec": 0 00:11:50.739 }, 00:11:50.739 "claimed": true, 00:11:50.739 "claim_type": "exclusive_write", 00:11:50.739 "zoned": false, 00:11:50.739 
"supported_io_types": { 00:11:50.739 "read": true, 00:11:50.739 "write": true, 00:11:50.739 "unmap": true, 00:11:50.739 "flush": true, 00:11:50.739 "reset": true, 00:11:50.739 "nvme_admin": false, 00:11:50.739 "nvme_io": false, 00:11:50.739 "nvme_io_md": false, 00:11:50.739 "write_zeroes": true, 00:11:50.739 "zcopy": true, 00:11:50.739 "get_zone_info": false, 00:11:50.739 "zone_management": false, 00:11:50.739 "zone_append": false, 00:11:50.739 "compare": false, 00:11:50.739 "compare_and_write": false, 00:11:50.739 "abort": true, 00:11:50.739 "seek_hole": false, 00:11:50.739 "seek_data": false, 00:11:50.739 "copy": true, 00:11:50.739 "nvme_iov_md": false 00:11:50.739 }, 00:11:50.739 "memory_domains": [ 00:11:50.739 { 00:11:50.739 "dma_device_id": "system", 00:11:50.739 "dma_device_type": 1 00:11:50.739 }, 00:11:50.739 { 00:11:50.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.739 "dma_device_type": 2 00:11:50.739 } 00:11:50.739 ], 00:11:50.739 "driver_specific": {} 00:11:50.739 }' 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:50.739 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.303 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.303 "name": "BaseBdev2", 00:11:51.303 "aliases": [ 00:11:51.303 "a37c950f-42d7-11ef-9ade-d5fc5159efa5" 00:11:51.303 ], 00:11:51.303 "product_name": "Malloc disk", 00:11:51.303 "block_size": 512, 00:11:51.303 "num_blocks": 65536, 00:11:51.303 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5", 00:11:51.303 "assigned_rate_limits": { 00:11:51.303 "rw_ios_per_sec": 0, 00:11:51.303 "rw_mbytes_per_sec": 0, 00:11:51.303 "r_mbytes_per_sec": 0, 00:11:51.303 "w_mbytes_per_sec": 0 00:11:51.303 }, 00:11:51.303 "claimed": true, 00:11:51.303 "claim_type": "exclusive_write", 00:11:51.303 "zoned": false, 00:11:51.303 "supported_io_types": { 00:11:51.303 "read": true, 00:11:51.303 "write": true, 00:11:51.303 "unmap": true, 00:11:51.303 "flush": true, 00:11:51.303 "reset": true, 00:11:51.304 "nvme_admin": false, 
00:11:51.304 "nvme_io": false, 00:11:51.304 "nvme_io_md": false, 00:11:51.304 "write_zeroes": true, 00:11:51.304 "zcopy": true, 00:11:51.304 "get_zone_info": false, 00:11:51.304 "zone_management": false, 00:11:51.304 "zone_append": false, 00:11:51.304 "compare": false, 00:11:51.304 "compare_and_write": false, 00:11:51.304 "abort": true, 00:11:51.304 "seek_hole": false, 00:11:51.304 "seek_data": false, 00:11:51.304 "copy": true, 00:11:51.304 "nvme_iov_md": false 00:11:51.304 }, 00:11:51.304 "memory_domains": [ 00:11:51.304 { 00:11:51.304 "dma_device_id": "system", 00:11:51.304 "dma_device_type": 1 00:11:51.304 }, 00:11:51.304 { 00:11:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.304 "dma_device_type": 2 00:11:51.304 } 00:11:51.304 ], 00:11:51.304 "driver_specific": {} 00:11:51.304 }' 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:51.304 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.562 "name": "BaseBdev3", 00:11:51.562 "aliases": [ 00:11:51.562 "a44cb965-42d7-11ef-9ade-d5fc5159efa5" 00:11:51.562 ], 00:11:51.562 "product_name": "Malloc disk", 00:11:51.562 "block_size": 512, 00:11:51.562 "num_blocks": 65536, 00:11:51.562 "uuid": "a44cb965-42d7-11ef-9ade-d5fc5159efa5", 00:11:51.562 "assigned_rate_limits": { 00:11:51.562 "rw_ios_per_sec": 0, 00:11:51.562 "rw_mbytes_per_sec": 0, 00:11:51.562 "r_mbytes_per_sec": 0, 00:11:51.562 "w_mbytes_per_sec": 0 00:11:51.562 }, 00:11:51.562 "claimed": true, 00:11:51.562 "claim_type": "exclusive_write", 00:11:51.562 "zoned": false, 00:11:51.562 "supported_io_types": { 00:11:51.562 "read": true, 00:11:51.562 "write": true, 00:11:51.562 "unmap": true, 00:11:51.562 "flush": true, 00:11:51.562 "reset": true, 00:11:51.562 "nvme_admin": false, 00:11:51.562 "nvme_io": false, 00:11:51.562 "nvme_io_md": false, 00:11:51.562 "write_zeroes": true, 00:11:51.562 "zcopy": true, 00:11:51.562 "get_zone_info": false, 00:11:51.562 "zone_management": 
false, 00:11:51.562 "zone_append": false, 00:11:51.562 "compare": false, 00:11:51.562 "compare_and_write": false, 00:11:51.562 "abort": true, 00:11:51.562 "seek_hole": false, 00:11:51.562 "seek_data": false, 00:11:51.562 "copy": true, 00:11:51.562 "nvme_iov_md": false 00:11:51.562 }, 00:11:51.562 "memory_domains": [ 00:11:51.562 { 00:11:51.562 "dma_device_id": "system", 00:11:51.562 "dma_device_type": 1 00:11:51.562 }, 00:11:51.562 { 00:11:51.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.562 "dma_device_type": 2 00:11:51.562 } 00:11:51.562 ], 00:11:51.562 "driver_specific": {} 00:11:51.562 }' 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.562 18:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:51.820 [2024-07-15 18:25:44.038955] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.820 18:25:44 bdev_raid.raid_state_function_test -- 
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:52.078 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:11:52.078 "name": "Existed_Raid",
00:11:52.078 "uuid": "a44cc040-42d7-11ef-9ade-d5fc5159efa5",
00:11:52.078 "strip_size_kb": 0,
00:11:52.078 "state": "online",
00:11:52.078 "raid_level": "raid1",
00:11:52.078 "superblock": false,
00:11:52.078 "num_base_bdevs": 3,
00:11:52.078 "num_base_bdevs_discovered": 2,
00:11:52.078 "num_base_bdevs_operational": 2,
00:11:52.078 "base_bdevs_list": [
00:11:52.078 {
00:11:52.078 "name": null,
00:11:52.078 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:52.078 "is_configured": false,
00:11:52.078 "data_offset": 0,
00:11:52.078 "data_size": 65536
00:11:52.078 },
00:11:52.078 {
00:11:52.078 "name": "BaseBdev2",
00:11:52.078 "uuid": "a37c950f-42d7-11ef-9ade-d5fc5159efa5",
00:11:52.078 "is_configured": true,
00:11:52.078 "data_offset": 0,
00:11:52.078 "data_size": 65536
00:11:52.078 },
00:11:52.078 {
00:11:52.078 "name": "BaseBdev3",
00:11:52.078 "uuid": "a44cb965-42d7-11ef-9ade-d5fc5159efa5",
00:11:52.078 "is_configured": true,
00:11:52.078 "data_offset": 0,
00:11:52.078 "data_size": 65536
00:11:52.078 }
00:11:52.078 ]
00:11:52.078 }'
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
18:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.363 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:11:52.621 18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
18:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:11:52.880 [2024-07-15 18:25:45.180746] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:11:53.138 18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:11:53.396 [2024-07-15 18:25:45.689087] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-07-15 18:25:45.689129] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-07-15 18:25:45.698574] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-07-15 18:25:45.698594] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-07-15 18:25:45.698599] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378cbd034a00 name Existed_Raid, state offline
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:11:53.681 18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev=
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
18:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']'
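Removing the remaining members is the teardown half of the test: once the last base bdev goes away, the raid flips from online to offline, destructs, and its name no longer resolves. A sketch of the post-teardown probe, with rpc as in the first sketch (the jq filter is the one the script itself uses at line 293):

  rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'   # empty once Existed_Raid is gone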
"product_name": "Malloc disk", 00:11:54.482 "block_size": 512, 00:11:54.482 "num_blocks": 65536, 00:11:54.482 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:11:54.482 "assigned_rate_limits": { 00:11:54.482 "rw_ios_per_sec": 0, 00:11:54.482 "rw_mbytes_per_sec": 0, 00:11:54.482 "r_mbytes_per_sec": 0, 00:11:54.482 "w_mbytes_per_sec": 0 00:11:54.482 }, 00:11:54.483 "claimed": false, 00:11:54.483 "zoned": false, 00:11:54.483 "supported_io_types": { 00:11:54.483 "read": true, 00:11:54.483 "write": true, 00:11:54.483 "unmap": true, 00:11:54.483 "flush": true, 00:11:54.483 "reset": true, 00:11:54.483 "nvme_admin": false, 00:11:54.483 "nvme_io": false, 00:11:54.483 "nvme_io_md": false, 00:11:54.483 "write_zeroes": true, 00:11:54.483 "zcopy": true, 00:11:54.483 "get_zone_info": false, 00:11:54.483 "zone_management": false, 00:11:54.483 "zone_append": false, 00:11:54.483 "compare": false, 00:11:54.483 "compare_and_write": false, 00:11:54.483 "abort": true, 00:11:54.483 "seek_hole": false, 00:11:54.483 "seek_data": false, 00:11:54.483 "copy": true, 00:11:54.483 "nvme_iov_md": false 00:11:54.483 }, 00:11:54.483 "memory_domains": [ 00:11:54.483 { 00:11:54.483 "dma_device_id": "system", 00:11:54.483 "dma_device_type": 1 00:11:54.483 }, 00:11:54.483 { 00:11:54.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.483 "dma_device_type": 2 00:11:54.483 } 00:11:54.483 ], 00:11:54.483 "driver_specific": {} 00:11:54.483 } 00:11:54.483 ] 00:11:54.483 18:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:54.483 18:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:54.483 18:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:54.483 18:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.741 BaseBdev3 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.741 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:55.001 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.259 [ 00:11:55.259 { 00:11:55.259 "name": "BaseBdev3", 00:11:55.259 "aliases": [ 00:11:55.259 "a7af645e-42d7-11ef-9ade-d5fc5159efa5" 00:11:55.259 ], 00:11:55.259 "product_name": "Malloc disk", 00:11:55.259 "block_size": 512, 00:11:55.259 "num_blocks": 65536, 00:11:55.259 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:11:55.259 "assigned_rate_limits": { 00:11:55.259 "rw_ios_per_sec": 0, 00:11:55.259 "rw_mbytes_per_sec": 0, 00:11:55.259 "r_mbytes_per_sec": 0, 00:11:55.259 "w_mbytes_per_sec": 0 
00:11:55.259 }, 00:11:55.259 "claimed": false, 00:11:55.259 "zoned": false, 00:11:55.259 "supported_io_types": { 00:11:55.259 "read": true, 00:11:55.259 "write": true, 00:11:55.259 "unmap": true, 00:11:55.259 "flush": true, 00:11:55.259 "reset": true, 00:11:55.259 "nvme_admin": false, 00:11:55.259 "nvme_io": false, 00:11:55.259 "nvme_io_md": false, 00:11:55.259 "write_zeroes": true, 00:11:55.259 "zcopy": true, 00:11:55.259 "get_zone_info": false, 00:11:55.259 "zone_management": false, 00:11:55.259 "zone_append": false, 00:11:55.259 "compare": false, 00:11:55.259 "compare_and_write": false, 00:11:55.259 "abort": true, 00:11:55.259 "seek_hole": false, 00:11:55.259 "seek_data": false, 00:11:55.259 "copy": true, 00:11:55.259 "nvme_iov_md": false 00:11:55.259 }, 00:11:55.259 "memory_domains": [ 00:11:55.259 { 00:11:55.259 "dma_device_id": "system", 00:11:55.259 "dma_device_type": 1 00:11:55.259 }, 00:11:55.259 { 00:11:55.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.259 "dma_device_type": 2 00:11:55.259 } 00:11:55.259 ], 00:11:55.259 "driver_specific": {} 00:11:55.259 } 00:11:55.259 ] 00:11:55.259 18:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:55.259 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:55.259 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:55.259 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:55.518 [2024-07-15 18:25:47.866657] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.518 [2024-07-15 18:25:47.866721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.518 [2024-07-15 18:25:47.866730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.518 [2024-07-15 18:25:47.867328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.518 18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.518 18:25:47 
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
18:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:55.777 18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:11:55.777 "name": "Existed_Raid",
00:11:55.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:55.777 "strip_size_kb": 0,
00:11:55.777 "state": "configuring",
00:11:55.777 "raid_level": "raid1",
00:11:55.777 "superblock": false,
00:11:55.777 "num_base_bdevs": 3,
00:11:55.777 "num_base_bdevs_discovered": 2,
00:11:55.777 "num_base_bdevs_operational": 3,
00:11:55.777 "base_bdevs_list": [
00:11:55.777 {
00:11:55.777 "name": "BaseBdev1",
00:11:55.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:55.777 "is_configured": false,
00:11:55.777 "data_offset": 0,
00:11:55.777 "data_size": 0
00:11:55.777 },
00:11:55.777 {
00:11:55.777 "name": "BaseBdev2",
00:11:55.777 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5",
00:11:55.777 "is_configured": true,
00:11:55.777 "data_offset": 0,
00:11:55.777 "data_size": 65536
00:11:55.777 },
00:11:55.777 {
00:11:55.777 "name": "BaseBdev3",
00:11:55.777 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5",
00:11:55.777 "is_configured": true,
00:11:55.777 "data_offset": 0,
00:11:55.777 "data_size": 65536
00:11:55.777 }
00:11:55.777 ]
00:11:55.777 }'
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
18:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.344 18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:11:56.603 [2024-07-15 18:25:48.746712] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
18:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:11:56.862 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:11:56.862 "name": "Existed_Raid",
00:11:56.862 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.862 "strip_size_kb": 0,
00:11:56.862 "state": "configuring",
"configuring", 00:11:56.862 "raid_level": "raid1", 00:11:56.862 "superblock": false, 00:11:56.862 "num_base_bdevs": 3, 00:11:56.862 "num_base_bdevs_discovered": 1, 00:11:56.862 "num_base_bdevs_operational": 3, 00:11:56.862 "base_bdevs_list": [ 00:11:56.862 { 00:11:56.862 "name": "BaseBdev1", 00:11:56.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.862 "is_configured": false, 00:11:56.862 "data_offset": 0, 00:11:56.862 "data_size": 0 00:11:56.862 }, 00:11:56.862 { 00:11:56.862 "name": null, 00:11:56.862 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:11:56.862 "is_configured": false, 00:11:56.862 "data_offset": 0, 00:11:56.862 "data_size": 65536 00:11:56.862 }, 00:11:56.862 { 00:11:56.863 "name": "BaseBdev3", 00:11:56.863 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:11:56.863 "is_configured": true, 00:11:56.863 "data_offset": 0, 00:11:56.863 "data_size": 65536 00:11:56.863 } 00:11:56.863 ] 00:11:56.863 }' 00:11:56.863 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.863 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.121 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.121 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.379 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:57.379 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.638 [2024-07-15 18:25:49.778905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.638 BaseBdev1 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:57.638 18:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:57.896 18:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.155 [ 00:11:58.155 { 00:11:58.155 "name": "BaseBdev1", 00:11:58.155 "aliases": [ 00:11:58.155 "a95049ab-42d7-11ef-9ade-d5fc5159efa5" 00:11:58.155 ], 00:11:58.155 "product_name": "Malloc disk", 00:11:58.155 "block_size": 512, 00:11:58.155 "num_blocks": 65536, 00:11:58.155 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:11:58.155 "assigned_rate_limits": { 00:11:58.155 "rw_ios_per_sec": 0, 00:11:58.155 "rw_mbytes_per_sec": 0, 00:11:58.155 "r_mbytes_per_sec": 0, 00:11:58.155 "w_mbytes_per_sec": 0 00:11:58.155 }, 00:11:58.155 "claimed": true, 00:11:58.155 "claim_type": 
"exclusive_write", 00:11:58.155 "zoned": false, 00:11:58.155 "supported_io_types": { 00:11:58.155 "read": true, 00:11:58.155 "write": true, 00:11:58.155 "unmap": true, 00:11:58.155 "flush": true, 00:11:58.155 "reset": true, 00:11:58.155 "nvme_admin": false, 00:11:58.155 "nvme_io": false, 00:11:58.155 "nvme_io_md": false, 00:11:58.155 "write_zeroes": true, 00:11:58.155 "zcopy": true, 00:11:58.155 "get_zone_info": false, 00:11:58.155 "zone_management": false, 00:11:58.155 "zone_append": false, 00:11:58.155 "compare": false, 00:11:58.155 "compare_and_write": false, 00:11:58.155 "abort": true, 00:11:58.155 "seek_hole": false, 00:11:58.155 "seek_data": false, 00:11:58.155 "copy": true, 00:11:58.155 "nvme_iov_md": false 00:11:58.155 }, 00:11:58.155 "memory_domains": [ 00:11:58.155 { 00:11:58.155 "dma_device_id": "system", 00:11:58.155 "dma_device_type": 1 00:11:58.155 }, 00:11:58.155 { 00:11:58.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.155 "dma_device_type": 2 00:11:58.155 } 00:11:58.155 ], 00:11:58.155 "driver_specific": {} 00:11:58.155 } 00:11:58.155 ] 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.155 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.415 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.415 "name": "Existed_Raid", 00:11:58.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.415 "strip_size_kb": 0, 00:11:58.415 "state": "configuring", 00:11:58.415 "raid_level": "raid1", 00:11:58.415 "superblock": false, 00:11:58.415 "num_base_bdevs": 3, 00:11:58.415 "num_base_bdevs_discovered": 2, 00:11:58.415 "num_base_bdevs_operational": 3, 00:11:58.415 "base_bdevs_list": [ 00:11:58.415 { 00:11:58.415 "name": "BaseBdev1", 00:11:58.415 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:11:58.415 "is_configured": true, 00:11:58.415 "data_offset": 0, 00:11:58.415 "data_size": 65536 00:11:58.415 }, 00:11:58.415 { 00:11:58.415 "name": null, 00:11:58.415 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:11:58.415 "is_configured": false, 00:11:58.415 "data_offset": 0, 
00:11:58.415 "data_size": 65536 00:11:58.415 }, 00:11:58.415 { 00:11:58.415 "name": "BaseBdev3", 00:11:58.415 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:11:58.415 "is_configured": true, 00:11:58.415 "data_offset": 0, 00:11:58.415 "data_size": 65536 00:11:58.415 } 00:11:58.415 ] 00:11:58.415 }' 00:11:58.415 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.415 18:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.673 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.673 18:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.931 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:58.932 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:59.190 [2024-07-15 18:25:51.446874] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.190 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.450 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.450 "name": "Existed_Raid", 00:11:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.450 "strip_size_kb": 0, 00:11:59.450 "state": "configuring", 00:11:59.450 "raid_level": "raid1", 00:11:59.450 "superblock": false, 00:11:59.450 "num_base_bdevs": 3, 00:11:59.450 "num_base_bdevs_discovered": 1, 00:11:59.450 "num_base_bdevs_operational": 3, 00:11:59.450 "base_bdevs_list": [ 00:11:59.450 { 00:11:59.450 "name": "BaseBdev1", 00:11:59.450 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:11:59.450 "is_configured": true, 00:11:59.450 "data_offset": 0, 00:11:59.450 "data_size": 65536 00:11:59.450 }, 00:11:59.450 { 00:11:59.450 "name": null, 00:11:59.450 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:11:59.450 
"is_configured": false, 00:11:59.450 "data_offset": 0, 00:11:59.450 "data_size": 65536 00:11:59.450 }, 00:11:59.450 { 00:11:59.450 "name": null, 00:11:59.450 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:11:59.450 "is_configured": false, 00:11:59.450 "data_offset": 0, 00:11:59.450 "data_size": 65536 00:11:59.450 } 00:11:59.450 ] 00:11:59.450 }' 00:11:59.450 18:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.450 18:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.708 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.708 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.967 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:59.967 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:00.534 [2024-07-15 18:25:52.610951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.534 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.534 "name": "Existed_Raid", 00:12:00.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.534 "strip_size_kb": 0, 00:12:00.534 "state": "configuring", 00:12:00.534 "raid_level": "raid1", 00:12:00.534 "superblock": false, 00:12:00.535 "num_base_bdevs": 3, 00:12:00.535 "num_base_bdevs_discovered": 2, 00:12:00.535 "num_base_bdevs_operational": 3, 00:12:00.535 "base_bdevs_list": [ 00:12:00.535 { 00:12:00.535 "name": "BaseBdev1", 00:12:00.535 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:00.535 "is_configured": true, 00:12:00.535 "data_offset": 0, 00:12:00.535 "data_size": 65536 00:12:00.535 }, 00:12:00.535 { 00:12:00.535 "name": null, 
00:12:00.535 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:00.535 "is_configured": false, 00:12:00.535 "data_offset": 0, 00:12:00.535 "data_size": 65536 00:12:00.535 }, 00:12:00.535 { 00:12:00.535 "name": "BaseBdev3", 00:12:00.535 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:00.535 "is_configured": true, 00:12:00.535 "data_offset": 0, 00:12:00.535 "data_size": 65536 00:12:00.535 } 00:12:00.535 ] 00:12:00.535 }' 00:12:00.535 18:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.535 18:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.103 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.103 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.360 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:01.360 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:01.619 [2024-07-15 18:25:53.795030] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.619 18:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.878 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.878 "name": "Existed_Raid", 00:12:01.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.878 "strip_size_kb": 0, 00:12:01.878 "state": "configuring", 00:12:01.878 "raid_level": "raid1", 00:12:01.878 "superblock": false, 00:12:01.878 "num_base_bdevs": 3, 00:12:01.878 "num_base_bdevs_discovered": 1, 00:12:01.878 "num_base_bdevs_operational": 3, 00:12:01.878 "base_bdevs_list": [ 00:12:01.878 { 00:12:01.878 "name": null, 00:12:01.878 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:01.878 "is_configured": false, 00:12:01.878 "data_offset": 0, 00:12:01.878 "data_size": 65536 00:12:01.878 }, 
00:12:01.878 { 00:12:01.878 "name": null, 00:12:01.878 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:01.878 "is_configured": false, 00:12:01.878 "data_offset": 0, 00:12:01.878 "data_size": 65536 00:12:01.878 }, 00:12:01.878 { 00:12:01.878 "name": "BaseBdev3", 00:12:01.878 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:01.878 "is_configured": true, 00:12:01.878 "data_offset": 0, 00:12:01.878 "data_size": 65536 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 }' 00:12:01.878 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.878 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.136 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.136 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.394 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:02.394 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:02.654 [2024-07-15 18:25:54.959366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.654 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.937 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.937 "name": "Existed_Raid", 00:12:02.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.937 "strip_size_kb": 0, 00:12:02.937 "state": "configuring", 00:12:02.937 "raid_level": "raid1", 00:12:02.937 "superblock": false, 00:12:02.937 "num_base_bdevs": 3, 00:12:02.937 "num_base_bdevs_discovered": 2, 00:12:02.937 "num_base_bdevs_operational": 3, 00:12:02.937 "base_bdevs_list": [ 00:12:02.937 { 00:12:02.937 "name": null, 00:12:02.937 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:02.937 "is_configured": false, 
00:12:02.937 "data_offset": 0, 00:12:02.937 "data_size": 65536 00:12:02.937 }, 00:12:02.937 { 00:12:02.937 "name": "BaseBdev2", 00:12:02.937 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:02.937 "is_configured": true, 00:12:02.937 "data_offset": 0, 00:12:02.937 "data_size": 65536 00:12:02.937 }, 00:12:02.937 { 00:12:02.937 "name": "BaseBdev3", 00:12:02.937 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:02.937 "is_configured": true, 00:12:02.937 "data_offset": 0, 00:12:02.937 "data_size": 65536 00:12:02.937 } 00:12:02.937 ] 00:12:02.937 }' 00:12:02.937 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.937 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.504 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:03.504 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.763 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:03.763 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.763 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.022 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a95049ab-42d7-11ef-9ade-d5fc5159efa5 00:12:04.281 [2024-07-15 18:25:56.579425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.282 [2024-07-15 18:25:56.579458] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378cbd034f00 00:12:04.282 [2024-07-15 18:25:56.579462] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.282 [2024-07-15 18:25:56.579485] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378cbd097e20 00:12:04.282 [2024-07-15 18:25:56.579559] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378cbd034f00 00:12:04.282 [2024-07-15 18:25:56.579564] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x378cbd034f00 00:12:04.282 [2024-07-15 18:25:56.579599] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.282 NewBaseBdev 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:04.282 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:04.541 18:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:05.108 [ 00:12:05.108 { 00:12:05.108 "name": "NewBaseBdev", 00:12:05.108 "aliases": [ 00:12:05.108 "a95049ab-42d7-11ef-9ade-d5fc5159efa5" 00:12:05.108 ], 00:12:05.108 "product_name": "Malloc disk", 00:12:05.108 "block_size": 512, 00:12:05.108 "num_blocks": 65536, 00:12:05.108 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.108 "assigned_rate_limits": { 00:12:05.108 "rw_ios_per_sec": 0, 00:12:05.108 "rw_mbytes_per_sec": 0, 00:12:05.108 "r_mbytes_per_sec": 0, 00:12:05.108 "w_mbytes_per_sec": 0 00:12:05.108 }, 00:12:05.108 "claimed": true, 00:12:05.108 "claim_type": "exclusive_write", 00:12:05.108 "zoned": false, 00:12:05.108 "supported_io_types": { 00:12:05.108 "read": true, 00:12:05.108 "write": true, 00:12:05.109 "unmap": true, 00:12:05.109 "flush": true, 00:12:05.109 "reset": true, 00:12:05.109 "nvme_admin": false, 00:12:05.109 "nvme_io": false, 00:12:05.109 "nvme_io_md": false, 00:12:05.109 "write_zeroes": true, 00:12:05.109 "zcopy": true, 00:12:05.109 "get_zone_info": false, 00:12:05.109 "zone_management": false, 00:12:05.109 "zone_append": false, 00:12:05.109 "compare": false, 00:12:05.109 "compare_and_write": false, 00:12:05.109 "abort": true, 00:12:05.109 "seek_hole": false, 00:12:05.109 "seek_data": false, 00:12:05.109 "copy": true, 00:12:05.109 "nvme_iov_md": false 00:12:05.109 }, 00:12:05.109 "memory_domains": [ 00:12:05.109 { 00:12:05.109 "dma_device_id": "system", 00:12:05.109 "dma_device_type": 1 00:12:05.109 }, 00:12:05.109 { 00:12:05.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.109 "dma_device_type": 2 00:12:05.109 } 00:12:05.109 ], 00:12:05.109 "driver_specific": {} 00:12:05.109 } 00:12:05.109 ] 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.109 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.368 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.368 "name": "Existed_Raid", 
00:12:05.368 "uuid": "ad5dfd7d-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.368 "strip_size_kb": 0, 00:12:05.368 "state": "online", 00:12:05.368 "raid_level": "raid1", 00:12:05.368 "superblock": false, 00:12:05.368 "num_base_bdevs": 3, 00:12:05.368 "num_base_bdevs_discovered": 3, 00:12:05.368 "num_base_bdevs_operational": 3, 00:12:05.368 "base_bdevs_list": [ 00:12:05.368 { 00:12:05.368 "name": "NewBaseBdev", 00:12:05.368 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.368 "is_configured": true, 00:12:05.368 "data_offset": 0, 00:12:05.368 "data_size": 65536 00:12:05.368 }, 00:12:05.368 { 00:12:05.368 "name": "BaseBdev2", 00:12:05.368 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.368 "is_configured": true, 00:12:05.368 "data_offset": 0, 00:12:05.368 "data_size": 65536 00:12:05.368 }, 00:12:05.368 { 00:12:05.368 "name": "BaseBdev3", 00:12:05.368 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.368 "is_configured": true, 00:12:05.368 "data_offset": 0, 00:12:05.368 "data_size": 65536 00:12:05.368 } 00:12:05.368 ] 00:12:05.368 }' 00:12:05.368 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.368 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:05.626 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:05.885 [2024-07-15 18:25:58.243244] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.885 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:05.885 "name": "Existed_Raid", 00:12:05.885 "aliases": [ 00:12:05.885 "ad5dfd7d-42d7-11ef-9ade-d5fc5159efa5" 00:12:05.885 ], 00:12:05.885 "product_name": "Raid Volume", 00:12:05.885 "block_size": 512, 00:12:05.885 "num_blocks": 65536, 00:12:05.885 "uuid": "ad5dfd7d-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.885 "assigned_rate_limits": { 00:12:05.885 "rw_ios_per_sec": 0, 00:12:05.885 "rw_mbytes_per_sec": 0, 00:12:05.885 "r_mbytes_per_sec": 0, 00:12:05.885 "w_mbytes_per_sec": 0 00:12:05.885 }, 00:12:05.885 "claimed": false, 00:12:05.885 "zoned": false, 00:12:05.885 "supported_io_types": { 00:12:05.885 "read": true, 00:12:05.885 "write": true, 00:12:05.885 "unmap": false, 00:12:05.885 "flush": false, 00:12:05.885 "reset": true, 00:12:05.885 "nvme_admin": false, 00:12:05.885 "nvme_io": false, 00:12:05.885 "nvme_io_md": false, 00:12:05.885 "write_zeroes": true, 00:12:05.885 "zcopy": false, 00:12:05.885 "get_zone_info": false, 00:12:05.885 "zone_management": false, 00:12:05.885 "zone_append": false, 00:12:05.885 "compare": false, 00:12:05.885 "compare_and_write": false, 00:12:05.885 "abort": 
false, 00:12:05.885 "seek_hole": false, 00:12:05.885 "seek_data": false, 00:12:05.885 "copy": false, 00:12:05.885 "nvme_iov_md": false 00:12:05.885 }, 00:12:05.885 "memory_domains": [ 00:12:05.885 { 00:12:05.885 "dma_device_id": "system", 00:12:05.885 "dma_device_type": 1 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.885 "dma_device_type": 2 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "dma_device_id": "system", 00:12:05.885 "dma_device_type": 1 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.885 "dma_device_type": 2 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "dma_device_id": "system", 00:12:05.885 "dma_device_type": 1 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.885 "dma_device_type": 2 00:12:05.885 } 00:12:05.885 ], 00:12:05.885 "driver_specific": { 00:12:05.885 "raid": { 00:12:05.885 "uuid": "ad5dfd7d-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.885 "strip_size_kb": 0, 00:12:05.885 "state": "online", 00:12:05.885 "raid_level": "raid1", 00:12:05.885 "superblock": false, 00:12:05.885 "num_base_bdevs": 3, 00:12:05.885 "num_base_bdevs_discovered": 3, 00:12:05.885 "num_base_bdevs_operational": 3, 00:12:05.885 "base_bdevs_list": [ 00:12:05.885 { 00:12:05.885 "name": "NewBaseBdev", 00:12:05.885 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.885 "is_configured": true, 00:12:05.885 "data_offset": 0, 00:12:05.885 "data_size": 65536 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "name": "BaseBdev2", 00:12:05.885 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.885 "is_configured": true, 00:12:05.885 "data_offset": 0, 00:12:05.885 "data_size": 65536 00:12:05.885 }, 00:12:05.885 { 00:12:05.885 "name": "BaseBdev3", 00:12:05.885 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:05.885 "is_configured": true, 00:12:05.885 "data_offset": 0, 00:12:05.885 "data_size": 65536 00:12:05.885 } 00:12:05.885 ] 00:12:05.885 } 00:12:05.885 } 00:12:05.885 }' 00:12:06.145 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.145 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:06.145 BaseBdev2 00:12:06.145 BaseBdev3' 00:12:06.145 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.145 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:06.145 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.403 "name": "NewBaseBdev", 00:12:06.403 "aliases": [ 00:12:06.403 "a95049ab-42d7-11ef-9ade-d5fc5159efa5" 00:12:06.403 ], 00:12:06.403 "product_name": "Malloc disk", 00:12:06.403 "block_size": 512, 00:12:06.403 "num_blocks": 65536, 00:12:06.403 "uuid": "a95049ab-42d7-11ef-9ade-d5fc5159efa5", 00:12:06.403 "assigned_rate_limits": { 00:12:06.403 "rw_ios_per_sec": 0, 00:12:06.403 "rw_mbytes_per_sec": 0, 00:12:06.403 "r_mbytes_per_sec": 0, 00:12:06.403 "w_mbytes_per_sec": 0 00:12:06.403 }, 00:12:06.403 "claimed": true, 00:12:06.403 "claim_type": "exclusive_write", 00:12:06.403 "zoned": false, 00:12:06.403 "supported_io_types": { 00:12:06.403 "read": true, 00:12:06.403 "write": 
true, 00:12:06.403 "unmap": true, 00:12:06.403 "flush": true, 00:12:06.403 "reset": true, 00:12:06.403 "nvme_admin": false, 00:12:06.403 "nvme_io": false, 00:12:06.403 "nvme_io_md": false, 00:12:06.403 "write_zeroes": true, 00:12:06.403 "zcopy": true, 00:12:06.403 "get_zone_info": false, 00:12:06.403 "zone_management": false, 00:12:06.403 "zone_append": false, 00:12:06.403 "compare": false, 00:12:06.403 "compare_and_write": false, 00:12:06.403 "abort": true, 00:12:06.403 "seek_hole": false, 00:12:06.403 "seek_data": false, 00:12:06.403 "copy": true, 00:12:06.403 "nvme_iov_md": false 00:12:06.403 }, 00:12:06.403 "memory_domains": [ 00:12:06.403 { 00:12:06.403 "dma_device_id": "system", 00:12:06.403 "dma_device_type": 1 00:12:06.403 }, 00:12:06.403 { 00:12:06.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.403 "dma_device_type": 2 00:12:06.403 } 00:12:06.403 ], 00:12:06.403 "driver_specific": {} 00:12:06.403 }' 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.403 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.404 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:06.404 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.662 "name": "BaseBdev2", 00:12:06.662 "aliases": [ 00:12:06.662 "a734b487-42d7-11ef-9ade-d5fc5159efa5" 00:12:06.662 ], 00:12:06.662 "product_name": "Malloc disk", 00:12:06.662 "block_size": 512, 00:12:06.662 "num_blocks": 65536, 00:12:06.662 "uuid": "a734b487-42d7-11ef-9ade-d5fc5159efa5", 00:12:06.662 "assigned_rate_limits": { 00:12:06.662 "rw_ios_per_sec": 0, 00:12:06.662 "rw_mbytes_per_sec": 0, 00:12:06.662 "r_mbytes_per_sec": 0, 00:12:06.662 "w_mbytes_per_sec": 0 00:12:06.662 }, 00:12:06.662 "claimed": true, 00:12:06.662 "claim_type": "exclusive_write", 00:12:06.662 "zoned": false, 00:12:06.662 "supported_io_types": { 00:12:06.662 "read": true, 00:12:06.662 "write": true, 00:12:06.662 "unmap": true, 00:12:06.662 "flush": true, 00:12:06.662 "reset": true, 00:12:06.662 "nvme_admin": false, 00:12:06.662 "nvme_io": false, 00:12:06.662 "nvme_io_md": false, 
00:12:06.662 "write_zeroes": true, 00:12:06.662 "zcopy": true, 00:12:06.662 "get_zone_info": false, 00:12:06.662 "zone_management": false, 00:12:06.662 "zone_append": false, 00:12:06.662 "compare": false, 00:12:06.662 "compare_and_write": false, 00:12:06.662 "abort": true, 00:12:06.662 "seek_hole": false, 00:12:06.662 "seek_data": false, 00:12:06.662 "copy": true, 00:12:06.662 "nvme_iov_md": false 00:12:06.662 }, 00:12:06.662 "memory_domains": [ 00:12:06.662 { 00:12:06.662 "dma_device_id": "system", 00:12:06.662 "dma_device_type": 1 00:12:06.662 }, 00:12:06.662 { 00:12:06.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.662 "dma_device_type": 2 00:12:06.662 } 00:12:06.662 ], 00:12:06.662 "driver_specific": {} 00:12:06.662 }' 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:06.662 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.921 "name": "BaseBdev3", 00:12:06.921 "aliases": [ 00:12:06.921 "a7af645e-42d7-11ef-9ade-d5fc5159efa5" 00:12:06.921 ], 00:12:06.921 "product_name": "Malloc disk", 00:12:06.921 "block_size": 512, 00:12:06.921 "num_blocks": 65536, 00:12:06.921 "uuid": "a7af645e-42d7-11ef-9ade-d5fc5159efa5", 00:12:06.921 "assigned_rate_limits": { 00:12:06.921 "rw_ios_per_sec": 0, 00:12:06.921 "rw_mbytes_per_sec": 0, 00:12:06.921 "r_mbytes_per_sec": 0, 00:12:06.921 "w_mbytes_per_sec": 0 00:12:06.921 }, 00:12:06.921 "claimed": true, 00:12:06.921 "claim_type": "exclusive_write", 00:12:06.921 "zoned": false, 00:12:06.921 "supported_io_types": { 00:12:06.921 "read": true, 00:12:06.921 "write": true, 00:12:06.921 "unmap": true, 00:12:06.921 "flush": true, 00:12:06.921 "reset": true, 00:12:06.921 "nvme_admin": false, 00:12:06.921 "nvme_io": false, 00:12:06.921 "nvme_io_md": false, 00:12:06.921 "write_zeroes": true, 00:12:06.921 "zcopy": true, 00:12:06.921 "get_zone_info": false, 00:12:06.921 "zone_management": false, 00:12:06.921 "zone_append": false, 00:12:06.921 "compare": 
false, 00:12:06.921 "compare_and_write": false, 00:12:06.921 "abort": true, 00:12:06.921 "seek_hole": false, 00:12:06.921 "seek_data": false, 00:12:06.921 "copy": true, 00:12:06.921 "nvme_iov_md": false 00:12:06.921 }, 00:12:06.921 "memory_domains": [ 00:12:06.921 { 00:12:06.921 "dma_device_id": "system", 00:12:06.921 "dma_device_type": 1 00:12:06.921 }, 00:12:06.921 { 00:12:06.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.921 "dma_device_type": 2 00:12:06.921 } 00:12:06.921 ], 00:12:06.921 "driver_specific": {} 00:12:06.921 }' 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.921 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:07.200 [2024-07-15 18:25:59.543141] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.200 [2024-07-15 18:25:59.543186] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.200 [2024-07-15 18:25:59.543217] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.200 [2024-07-15 18:25:59.543309] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.200 [2024-07-15 18:25:59.543316] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378cbd034f00 name Existed_Raid, state offline 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56124 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 56124 ']' 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 56124 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 56124 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:07.200 18:25:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:07.200 killing process with pid 56124 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56124' 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 56124 00:12:07.200 [2024-07-15 18:25:59.575239] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.200 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 56124 00:12:07.459 [2024-07-15 18:25:59.597904] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.459 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:07.459 00:12:07.459 real 0m24.933s 00:12:07.459 user 0m45.478s 00:12:07.459 sys 0m3.506s 00:12:07.459 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.459 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.459 ************************************ 00:12:07.459 END TEST raid_state_function_test 00:12:07.459 ************************************ 00:12:07.717 18:25:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:07.717 18:25:59 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:07.717 18:25:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:07.717 18:25:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.717 18:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.717 ************************************ 00:12:07.717 START TEST raid_state_function_test_sb 00:12:07.717 ************************************ 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:07.717 18:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56857 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:07.717 Process raid pid: 56857 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56857' 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56857 /var/tmp/spdk-raid.sock 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56857 ']' 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.717 18:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.717 [2024-07-15 18:25:59.881184] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:12:07.717 [2024-07-15 18:25:59.881392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:08.282 EAL: TSC is not safe to use in SMP mode 00:12:08.282 EAL: TSC is not invariant 00:12:08.282 [2024-07-15 18:26:00.483804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.282 [2024-07-15 18:26:00.609204] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
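# --- annotation (not part of the captured log) ---
# The raid_state_function_test_sb run above starts its own bdev_svc app on a
# private RPC socket (the @243/@246 steps) and blocks in waitforlisten until the
# socket answers before any bdev_raid_create is issued. A minimal sketch of that
# startup handshake, using the exact bdev_svc arguments logged above; the
# polling loop stands in for waitforlisten (defined in autotest_common.sh) and
# is a simplified assumption, not its real implementation:
start_bdev_svc() {
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the UNIX-domain RPC socket accepts a basic framework RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
}
# teardown mirrors the killprocess/wait pair seen at the end of each test above
# --- end annotation ---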
00:12:08.282 [2024-07-15 18:26:00.611869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.282 [2024-07-15 18:26:00.612865] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.282 [2024-07-15 18:26:00.612890] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.846 18:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.846 18:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:08.846 18:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:09.106 [2024-07-15 18:26:01.230538] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.106 [2024-07-15 18:26:01.230598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.106 [2024-07-15 18:26:01.230604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.106 [2024-07-15 18:26:01.230613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.106 [2024-07-15 18:26:01.230617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.106 [2024-07-15 18:26:01.230624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.106 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.365 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.365 "name": "Existed_Raid", 00:12:09.365 "uuid": "b023af61-42d7-11ef-9ade-d5fc5159efa5", 00:12:09.365 "strip_size_kb": 0, 00:12:09.365 "state": "configuring", 00:12:09.365 "raid_level": "raid1", 00:12:09.365 "superblock": true, 00:12:09.365 "num_base_bdevs": 3, 00:12:09.365 "num_base_bdevs_discovered": 0, 00:12:09.365 "num_base_bdevs_operational": 
3, 00:12:09.365 "base_bdevs_list": [ 00:12:09.365 { 00:12:09.365 "name": "BaseBdev1", 00:12:09.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.365 "is_configured": false, 00:12:09.365 "data_offset": 0, 00:12:09.365 "data_size": 0 00:12:09.365 }, 00:12:09.365 { 00:12:09.365 "name": "BaseBdev2", 00:12:09.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.365 "is_configured": false, 00:12:09.365 "data_offset": 0, 00:12:09.365 "data_size": 0 00:12:09.365 }, 00:12:09.365 { 00:12:09.365 "name": "BaseBdev3", 00:12:09.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.365 "is_configured": false, 00:12:09.365 "data_offset": 0, 00:12:09.365 "data_size": 0 00:12:09.365 } 00:12:09.365 ] 00:12:09.365 }' 00:12:09.365 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.365 18:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.623 18:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:09.888 [2024-07-15 18:26:02.110481] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.888 [2024-07-15 18:26:02.110516] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2c7ba34500 name Existed_Raid, state configuring 00:12:09.888 18:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:10.146 [2024-07-15 18:26:02.358478] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.146 [2024-07-15 18:26:02.358536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.146 [2024-07-15 18:26:02.358542] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.146 [2024-07-15 18:26:02.358551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.146 [2024-07-15 18:26:02.358555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:10.146 [2024-07-15 18:26:02.358563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.146 18:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.403 [2024-07-15 18:26:02.603528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.403 BaseBdev1 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:10.403 18:26:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.662 18:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.920 [ 00:12:10.920 { 00:12:10.920 "name": "BaseBdev1", 00:12:10.920 "aliases": [ 00:12:10.920 "b0f506ed-42d7-11ef-9ade-d5fc5159efa5" 00:12:10.920 ], 00:12:10.920 "product_name": "Malloc disk", 00:12:10.920 "block_size": 512, 00:12:10.920 "num_blocks": 65536, 00:12:10.920 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:10.920 "assigned_rate_limits": { 00:12:10.920 "rw_ios_per_sec": 0, 00:12:10.920 "rw_mbytes_per_sec": 0, 00:12:10.920 "r_mbytes_per_sec": 0, 00:12:10.920 "w_mbytes_per_sec": 0 00:12:10.920 }, 00:12:10.920 "claimed": true, 00:12:10.920 "claim_type": "exclusive_write", 00:12:10.920 "zoned": false, 00:12:10.920 "supported_io_types": { 00:12:10.920 "read": true, 00:12:10.920 "write": true, 00:12:10.920 "unmap": true, 00:12:10.920 "flush": true, 00:12:10.920 "reset": true, 00:12:10.920 "nvme_admin": false, 00:12:10.920 "nvme_io": false, 00:12:10.920 "nvme_io_md": false, 00:12:10.920 "write_zeroes": true, 00:12:10.920 "zcopy": true, 00:12:10.920 "get_zone_info": false, 00:12:10.920 "zone_management": false, 00:12:10.920 "zone_append": false, 00:12:10.920 "compare": false, 00:12:10.920 "compare_and_write": false, 00:12:10.920 "abort": true, 00:12:10.920 "seek_hole": false, 00:12:10.920 "seek_data": false, 00:12:10.920 "copy": true, 00:12:10.920 "nvme_iov_md": false 00:12:10.920 }, 00:12:10.920 "memory_domains": [ 00:12:10.920 { 00:12:10.920 "dma_device_id": "system", 00:12:10.920 "dma_device_type": 1 00:12:10.920 }, 00:12:10.920 { 00:12:10.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.920 "dma_device_type": 2 00:12:10.920 } 00:12:10.920 ], 00:12:10.920 "driver_specific": {} 00:12:10.920 } 00:12:10.920 ] 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.920 18:26:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.179 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.179 "name": "Existed_Raid", 00:12:11.179 "uuid": "b0cfcbca-42d7-11ef-9ade-d5fc5159efa5", 00:12:11.179 "strip_size_kb": 0, 00:12:11.179 "state": "configuring", 00:12:11.179 "raid_level": "raid1", 00:12:11.179 "superblock": true, 00:12:11.179 "num_base_bdevs": 3, 00:12:11.179 "num_base_bdevs_discovered": 1, 00:12:11.179 "num_base_bdevs_operational": 3, 00:12:11.179 "base_bdevs_list": [ 00:12:11.179 { 00:12:11.179 "name": "BaseBdev1", 00:12:11.179 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:11.179 "is_configured": true, 00:12:11.179 "data_offset": 2048, 00:12:11.179 "data_size": 63488 00:12:11.179 }, 00:12:11.179 { 00:12:11.179 "name": "BaseBdev2", 00:12:11.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.179 "is_configured": false, 00:12:11.179 "data_offset": 0, 00:12:11.179 "data_size": 0 00:12:11.179 }, 00:12:11.179 { 00:12:11.179 "name": "BaseBdev3", 00:12:11.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.179 "is_configured": false, 00:12:11.179 "data_offset": 0, 00:12:11.179 "data_size": 0 00:12:11.179 } 00:12:11.179 ] 00:12:11.179 }' 00:12:11.179 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.179 18:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.437 18:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:12.006 [2024-07-15 18:26:04.086415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.006 [2024-07-15 18:26:04.086454] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2c7ba34500 name Existed_Raid, state configuring 00:12:12.006 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:12.264 [2024-07-15 18:26:04.394430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.264 [2024-07-15 18:26:04.395334] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.264 [2024-07-15 18:26:04.395375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.264 [2024-07-15 18:26:04.395381] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.264 [2024-07-15 18:26:04.395389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.264 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.522 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.522 "name": "Existed_Raid", 00:12:12.522 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:12.522 "strip_size_kb": 0, 00:12:12.522 "state": "configuring", 00:12:12.522 "raid_level": "raid1", 00:12:12.522 "superblock": true, 00:12:12.522 "num_base_bdevs": 3, 00:12:12.522 "num_base_bdevs_discovered": 1, 00:12:12.522 "num_base_bdevs_operational": 3, 00:12:12.522 "base_bdevs_list": [ 00:12:12.522 { 00:12:12.522 "name": "BaseBdev1", 00:12:12.522 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:12.522 "is_configured": true, 00:12:12.522 "data_offset": 2048, 00:12:12.522 "data_size": 63488 00:12:12.522 }, 00:12:12.522 { 00:12:12.522 "name": "BaseBdev2", 00:12:12.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.522 "is_configured": false, 00:12:12.522 "data_offset": 0, 00:12:12.522 "data_size": 0 00:12:12.522 }, 00:12:12.522 { 00:12:12.522 "name": "BaseBdev3", 00:12:12.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.522 "is_configured": false, 00:12:12.522 "data_offset": 0, 00:12:12.522 "data_size": 0 00:12:12.522 } 00:12:12.522 ] 00:12:12.522 }' 00:12:12.522 18:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.522 18:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.037 [2024-07-15 18:26:05.314558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.037 BaseBdev2 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:13.037 18:26:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:13.295 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.553 [ 00:12:13.553 { 00:12:13.553 "name": "BaseBdev2", 00:12:13.553 "aliases": [ 00:12:13.553 "b292d5e4-42d7-11ef-9ade-d5fc5159efa5" 00:12:13.553 ], 00:12:13.553 "product_name": "Malloc disk", 00:12:13.553 "block_size": 512, 00:12:13.553 "num_blocks": 65536, 00:12:13.553 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:13.553 "assigned_rate_limits": { 00:12:13.553 "rw_ios_per_sec": 0, 00:12:13.553 "rw_mbytes_per_sec": 0, 00:12:13.553 "r_mbytes_per_sec": 0, 00:12:13.553 "w_mbytes_per_sec": 0 00:12:13.553 }, 00:12:13.553 "claimed": true, 00:12:13.553 "claim_type": "exclusive_write", 00:12:13.553 "zoned": false, 00:12:13.553 "supported_io_types": { 00:12:13.553 "read": true, 00:12:13.553 "write": true, 00:12:13.553 "unmap": true, 00:12:13.553 "flush": true, 00:12:13.553 "reset": true, 00:12:13.553 "nvme_admin": false, 00:12:13.553 "nvme_io": false, 00:12:13.553 "nvme_io_md": false, 00:12:13.553 "write_zeroes": true, 00:12:13.553 "zcopy": true, 00:12:13.553 "get_zone_info": false, 00:12:13.553 "zone_management": false, 00:12:13.553 "zone_append": false, 00:12:13.553 "compare": false, 00:12:13.553 "compare_and_write": false, 00:12:13.553 "abort": true, 00:12:13.553 "seek_hole": false, 00:12:13.553 "seek_data": false, 00:12:13.553 "copy": true, 00:12:13.553 "nvme_iov_md": false 00:12:13.553 }, 00:12:13.553 "memory_domains": [ 00:12:13.553 { 00:12:13.553 "dma_device_id": "system", 00:12:13.553 "dma_device_type": 1 00:12:13.553 }, 00:12:13.553 { 00:12:13.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.553 "dma_device_type": 2 00:12:13.553 } 00:12:13.553 ], 00:12:13.553 "driver_specific": {} 00:12:13.553 } 00:12:13.553 ] 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.553 18:26:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.553 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.813 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.813 "name": "Existed_Raid", 00:12:13.813 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:13.813 "strip_size_kb": 0, 00:12:13.813 "state": "configuring", 00:12:13.813 "raid_level": "raid1", 00:12:13.813 "superblock": true, 00:12:13.813 "num_base_bdevs": 3, 00:12:13.813 "num_base_bdevs_discovered": 2, 00:12:13.813 "num_base_bdevs_operational": 3, 00:12:13.813 "base_bdevs_list": [ 00:12:13.813 { 00:12:13.813 "name": "BaseBdev1", 00:12:13.813 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:13.813 "is_configured": true, 00:12:13.813 "data_offset": 2048, 00:12:13.813 "data_size": 63488 00:12:13.813 }, 00:12:13.813 { 00:12:13.813 "name": "BaseBdev2", 00:12:13.813 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:13.813 "is_configured": true, 00:12:13.813 "data_offset": 2048, 00:12:13.813 "data_size": 63488 00:12:13.813 }, 00:12:13.813 { 00:12:13.813 "name": "BaseBdev3", 00:12:13.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.813 "is_configured": false, 00:12:13.813 "data_offset": 0, 00:12:13.813 "data_size": 0 00:12:13.813 } 00:12:13.813 ] 00:12:13.813 }' 00:12:13.813 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.813 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.379 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.637 [2024-07-15 18:26:06.786533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.637 [2024-07-15 18:26:06.786609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e2c7ba34a00 00:12:14.637 [2024-07-15 18:26:06.786616] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.637 [2024-07-15 18:26:06.786639] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e2c7ba97e20 00:12:14.637 [2024-07-15 18:26:06.786699] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e2c7ba34a00 00:12:14.637 [2024-07-15 18:26:06.786703] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3e2c7ba34a00 00:12:14.637 [2024-07-15 18:26:06.786725] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.637 BaseBdev3 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:14.637 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:14.637 18:26:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:14.895 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.155 [ 00:12:15.155 { 00:12:15.155 "name": "BaseBdev3", 00:12:15.155 "aliases": [ 00:12:15.155 "b373719f-42d7-11ef-9ade-d5fc5159efa5" 00:12:15.155 ], 00:12:15.155 "product_name": "Malloc disk", 00:12:15.155 "block_size": 512, 00:12:15.155 "num_blocks": 65536, 00:12:15.155 "uuid": "b373719f-42d7-11ef-9ade-d5fc5159efa5", 00:12:15.155 "assigned_rate_limits": { 00:12:15.155 "rw_ios_per_sec": 0, 00:12:15.155 "rw_mbytes_per_sec": 0, 00:12:15.155 "r_mbytes_per_sec": 0, 00:12:15.155 "w_mbytes_per_sec": 0 00:12:15.155 }, 00:12:15.155 "claimed": true, 00:12:15.155 "claim_type": "exclusive_write", 00:12:15.155 "zoned": false, 00:12:15.155 "supported_io_types": { 00:12:15.155 "read": true, 00:12:15.155 "write": true, 00:12:15.155 "unmap": true, 00:12:15.155 "flush": true, 00:12:15.155 "reset": true, 00:12:15.155 "nvme_admin": false, 00:12:15.155 "nvme_io": false, 00:12:15.155 "nvme_io_md": false, 00:12:15.155 "write_zeroes": true, 00:12:15.155 "zcopy": true, 00:12:15.155 "get_zone_info": false, 00:12:15.155 "zone_management": false, 00:12:15.155 "zone_append": false, 00:12:15.155 "compare": false, 00:12:15.155 "compare_and_write": false, 00:12:15.155 "abort": true, 00:12:15.155 "seek_hole": false, 00:12:15.155 "seek_data": false, 00:12:15.155 "copy": true, 00:12:15.155 "nvme_iov_md": false 00:12:15.155 }, 00:12:15.155 "memory_domains": [ 00:12:15.155 { 00:12:15.155 "dma_device_id": "system", 00:12:15.155 "dma_device_type": 1 00:12:15.155 }, 00:12:15.155 { 00:12:15.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.155 "dma_device_type": 2 00:12:15.155 } 00:12:15.155 ], 00:12:15.155 "driver_specific": {} 00:12:15.155 } 00:12:15.155 ] 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:15.155 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.156 18:26:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.156 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.413 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.413 "name": "Existed_Raid", 00:12:15.413 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:15.413 "strip_size_kb": 0, 00:12:15.413 "state": "online", 00:12:15.413 "raid_level": "raid1", 00:12:15.413 "superblock": true, 00:12:15.413 "num_base_bdevs": 3, 00:12:15.413 "num_base_bdevs_discovered": 3, 00:12:15.413 "num_base_bdevs_operational": 3, 00:12:15.413 "base_bdevs_list": [ 00:12:15.413 { 00:12:15.413 "name": "BaseBdev1", 00:12:15.413 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:15.413 "is_configured": true, 00:12:15.413 "data_offset": 2048, 00:12:15.413 "data_size": 63488 00:12:15.413 }, 00:12:15.413 { 00:12:15.413 "name": "BaseBdev2", 00:12:15.413 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:15.413 "is_configured": true, 00:12:15.413 "data_offset": 2048, 00:12:15.413 "data_size": 63488 00:12:15.413 }, 00:12:15.413 { 00:12:15.413 "name": "BaseBdev3", 00:12:15.413 "uuid": "b373719f-42d7-11ef-9ade-d5fc5159efa5", 00:12:15.413 "is_configured": true, 00:12:15.413 "data_offset": 2048, 00:12:15.413 "data_size": 63488 00:12:15.413 } 00:12:15.413 ] 00:12:15.413 }' 00:12:15.413 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.413 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:15.980 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:15.980 [2024-07-15 18:26:08.354380] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.238 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:16.238 "name": "Existed_Raid", 00:12:16.238 "aliases": [ 00:12:16.238 "b20674f6-42d7-11ef-9ade-d5fc5159efa5" 00:12:16.238 ], 00:12:16.238 "product_name": "Raid Volume", 00:12:16.238 "block_size": 512, 00:12:16.238 "num_blocks": 63488, 00:12:16.238 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.238 "assigned_rate_limits": { 00:12:16.238 "rw_ios_per_sec": 0, 00:12:16.238 "rw_mbytes_per_sec": 0, 00:12:16.238 "r_mbytes_per_sec": 0, 00:12:16.238 "w_mbytes_per_sec": 0 00:12:16.238 }, 00:12:16.238 "claimed": false, 00:12:16.238 "zoned": false, 00:12:16.238 "supported_io_types": { 00:12:16.238 "read": true, 
00:12:16.238 "write": true, 00:12:16.238 "unmap": false, 00:12:16.238 "flush": false, 00:12:16.238 "reset": true, 00:12:16.238 "nvme_admin": false, 00:12:16.238 "nvme_io": false, 00:12:16.238 "nvme_io_md": false, 00:12:16.239 "write_zeroes": true, 00:12:16.239 "zcopy": false, 00:12:16.239 "get_zone_info": false, 00:12:16.239 "zone_management": false, 00:12:16.239 "zone_append": false, 00:12:16.239 "compare": false, 00:12:16.239 "compare_and_write": false, 00:12:16.239 "abort": false, 00:12:16.239 "seek_hole": false, 00:12:16.239 "seek_data": false, 00:12:16.239 "copy": false, 00:12:16.239 "nvme_iov_md": false 00:12:16.239 }, 00:12:16.239 "memory_domains": [ 00:12:16.239 { 00:12:16.239 "dma_device_id": "system", 00:12:16.239 "dma_device_type": 1 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.239 "dma_device_type": 2 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "dma_device_id": "system", 00:12:16.239 "dma_device_type": 1 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.239 "dma_device_type": 2 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "dma_device_id": "system", 00:12:16.239 "dma_device_type": 1 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.239 "dma_device_type": 2 00:12:16.239 } 00:12:16.239 ], 00:12:16.239 "driver_specific": { 00:12:16.239 "raid": { 00:12:16.239 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.239 "strip_size_kb": 0, 00:12:16.239 "state": "online", 00:12:16.239 "raid_level": "raid1", 00:12:16.239 "superblock": true, 00:12:16.239 "num_base_bdevs": 3, 00:12:16.239 "num_base_bdevs_discovered": 3, 00:12:16.239 "num_base_bdevs_operational": 3, 00:12:16.239 "base_bdevs_list": [ 00:12:16.239 { 00:12:16.239 "name": "BaseBdev1", 00:12:16.239 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.239 "is_configured": true, 00:12:16.239 "data_offset": 2048, 00:12:16.239 "data_size": 63488 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "name": "BaseBdev2", 00:12:16.239 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.239 "is_configured": true, 00:12:16.239 "data_offset": 2048, 00:12:16.239 "data_size": 63488 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "name": "BaseBdev3", 00:12:16.239 "uuid": "b373719f-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.239 "is_configured": true, 00:12:16.239 "data_offset": 2048, 00:12:16.239 "data_size": 63488 00:12:16.239 } 00:12:16.239 ] 00:12:16.239 } 00:12:16.239 } 00:12:16.239 }' 00:12:16.239 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.239 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:16.239 BaseBdev2 00:12:16.239 BaseBdev3' 00:12:16.239 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.239 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:16.239 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.497 "name": "BaseBdev1", 00:12:16.497 "aliases": [ 00:12:16.497 "b0f506ed-42d7-11ef-9ade-d5fc5159efa5" 00:12:16.497 ], 00:12:16.497 "product_name": "Malloc disk", 00:12:16.497 
"block_size": 512, 00:12:16.497 "num_blocks": 65536, 00:12:16.497 "uuid": "b0f506ed-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.497 "assigned_rate_limits": { 00:12:16.497 "rw_ios_per_sec": 0, 00:12:16.497 "rw_mbytes_per_sec": 0, 00:12:16.497 "r_mbytes_per_sec": 0, 00:12:16.497 "w_mbytes_per_sec": 0 00:12:16.497 }, 00:12:16.497 "claimed": true, 00:12:16.497 "claim_type": "exclusive_write", 00:12:16.497 "zoned": false, 00:12:16.497 "supported_io_types": { 00:12:16.497 "read": true, 00:12:16.497 "write": true, 00:12:16.497 "unmap": true, 00:12:16.497 "flush": true, 00:12:16.497 "reset": true, 00:12:16.497 "nvme_admin": false, 00:12:16.497 "nvme_io": false, 00:12:16.497 "nvme_io_md": false, 00:12:16.497 "write_zeroes": true, 00:12:16.497 "zcopy": true, 00:12:16.497 "get_zone_info": false, 00:12:16.497 "zone_management": false, 00:12:16.497 "zone_append": false, 00:12:16.497 "compare": false, 00:12:16.497 "compare_and_write": false, 00:12:16.497 "abort": true, 00:12:16.497 "seek_hole": false, 00:12:16.497 "seek_data": false, 00:12:16.497 "copy": true, 00:12:16.497 "nvme_iov_md": false 00:12:16.497 }, 00:12:16.497 "memory_domains": [ 00:12:16.497 { 00:12:16.497 "dma_device_id": "system", 00:12:16.497 "dma_device_type": 1 00:12:16.497 }, 00:12:16.497 { 00:12:16.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.497 "dma_device_type": 2 00:12:16.497 } 00:12:16.497 ], 00:12:16.497 "driver_specific": {} 00:12:16.497 }' 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:16.497 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.755 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.755 "name": "BaseBdev2", 00:12:16.755 "aliases": [ 00:12:16.755 "b292d5e4-42d7-11ef-9ade-d5fc5159efa5" 00:12:16.755 ], 00:12:16.755 "product_name": "Malloc disk", 00:12:16.755 "block_size": 512, 00:12:16.755 "num_blocks": 65536, 00:12:16.755 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:16.755 "assigned_rate_limits": { 
00:12:16.755 "rw_ios_per_sec": 0, 00:12:16.755 "rw_mbytes_per_sec": 0, 00:12:16.755 "r_mbytes_per_sec": 0, 00:12:16.755 "w_mbytes_per_sec": 0 00:12:16.755 }, 00:12:16.755 "claimed": true, 00:12:16.755 "claim_type": "exclusive_write", 00:12:16.755 "zoned": false, 00:12:16.755 "supported_io_types": { 00:12:16.755 "read": true, 00:12:16.755 "write": true, 00:12:16.755 "unmap": true, 00:12:16.755 "flush": true, 00:12:16.755 "reset": true, 00:12:16.755 "nvme_admin": false, 00:12:16.755 "nvme_io": false, 00:12:16.755 "nvme_io_md": false, 00:12:16.755 "write_zeroes": true, 00:12:16.755 "zcopy": true, 00:12:16.755 "get_zone_info": false, 00:12:16.755 "zone_management": false, 00:12:16.755 "zone_append": false, 00:12:16.755 "compare": false, 00:12:16.755 "compare_and_write": false, 00:12:16.755 "abort": true, 00:12:16.755 "seek_hole": false, 00:12:16.755 "seek_data": false, 00:12:16.755 "copy": true, 00:12:16.755 "nvme_iov_md": false 00:12:16.755 }, 00:12:16.755 "memory_domains": [ 00:12:16.755 { 00:12:16.755 "dma_device_id": "system", 00:12:16.755 "dma_device_type": 1 00:12:16.755 }, 00:12:16.755 { 00:12:16.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.755 "dma_device_type": 2 00:12:16.755 } 00:12:16.755 ], 00:12:16.755 "driver_specific": {} 00:12:16.755 }' 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.756 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:16.756 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.014 "name": "BaseBdev3", 00:12:17.014 "aliases": [ 00:12:17.014 "b373719f-42d7-11ef-9ade-d5fc5159efa5" 00:12:17.014 ], 00:12:17.014 "product_name": "Malloc disk", 00:12:17.014 "block_size": 512, 00:12:17.014 "num_blocks": 65536, 00:12:17.014 "uuid": "b373719f-42d7-11ef-9ade-d5fc5159efa5", 00:12:17.014 "assigned_rate_limits": { 00:12:17.014 "rw_ios_per_sec": 0, 00:12:17.014 "rw_mbytes_per_sec": 0, 00:12:17.014 "r_mbytes_per_sec": 0, 00:12:17.014 "w_mbytes_per_sec": 0 
00:12:17.014 }, 00:12:17.014 "claimed": true, 00:12:17.014 "claim_type": "exclusive_write", 00:12:17.014 "zoned": false, 00:12:17.014 "supported_io_types": { 00:12:17.014 "read": true, 00:12:17.014 "write": true, 00:12:17.014 "unmap": true, 00:12:17.014 "flush": true, 00:12:17.014 "reset": true, 00:12:17.014 "nvme_admin": false, 00:12:17.014 "nvme_io": false, 00:12:17.014 "nvme_io_md": false, 00:12:17.014 "write_zeroes": true, 00:12:17.014 "zcopy": true, 00:12:17.014 "get_zone_info": false, 00:12:17.014 "zone_management": false, 00:12:17.014 "zone_append": false, 00:12:17.014 "compare": false, 00:12:17.014 "compare_and_write": false, 00:12:17.014 "abort": true, 00:12:17.014 "seek_hole": false, 00:12:17.014 "seek_data": false, 00:12:17.014 "copy": true, 00:12:17.014 "nvme_iov_md": false 00:12:17.014 }, 00:12:17.014 "memory_domains": [ 00:12:17.014 { 00:12:17.014 "dma_device_id": "system", 00:12:17.014 "dma_device_type": 1 00:12:17.014 }, 00:12:17.014 { 00:12:17.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.014 "dma_device_type": 2 00:12:17.014 } 00:12:17.014 ], 00:12:17.014 "driver_specific": {} 00:12:17.014 }' 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.014 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:17.580 [2024-07-15 18:26:09.670346] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:17.580 18:26:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.580 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.580 "name": "Existed_Raid", 00:12:17.580 "uuid": "b20674f6-42d7-11ef-9ade-d5fc5159efa5", 00:12:17.580 "strip_size_kb": 0, 00:12:17.580 "state": "online", 00:12:17.580 "raid_level": "raid1", 00:12:17.580 "superblock": true, 00:12:17.580 "num_base_bdevs": 3, 00:12:17.580 "num_base_bdevs_discovered": 2, 00:12:17.580 "num_base_bdevs_operational": 2, 00:12:17.580 "base_bdevs_list": [ 00:12:17.580 { 00:12:17.580 "name": null, 00:12:17.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.580 "is_configured": false, 00:12:17.580 "data_offset": 2048, 00:12:17.580 "data_size": 63488 00:12:17.580 }, 00:12:17.580 { 00:12:17.580 "name": "BaseBdev2", 00:12:17.580 "uuid": "b292d5e4-42d7-11ef-9ade-d5fc5159efa5", 00:12:17.580 "is_configured": true, 00:12:17.580 "data_offset": 2048, 00:12:17.580 "data_size": 63488 00:12:17.580 }, 00:12:17.580 { 00:12:17.580 "name": "BaseBdev3", 00:12:17.580 "uuid": "b373719f-42d7-11ef-9ade-d5fc5159efa5", 00:12:17.581 "is_configured": true, 00:12:17.581 "data_offset": 2048, 00:12:17.581 "data_size": 63488 00:12:17.581 } 00:12:17.581 ] 00:12:17.581 }' 00:12:17.581 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.581 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.147 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:12:18.406 [2024-07-15 18:26:10.688415] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.406 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.406 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.406 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.406 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.664 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.664 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.664 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:18.923 [2024-07-15 18:26:11.265033] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.923 [2024-07-15 18:26:11.265075] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.923 [2024-07-15 18:26:11.273321] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.923 [2024-07-15 18:26:11.273350] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.923 [2024-07-15 18:26:11.273355] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2c7ba34a00 name Existed_Raid, state offline 00:12:18.923 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.923 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.923 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.923 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:19.181 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:19.181 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:19.181 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:19.182 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:19.182 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:19.182 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.440 BaseBdev2 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:19.440 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.700 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.958 [ 00:12:19.959 { 00:12:19.959 "name": "BaseBdev2", 00:12:19.959 "aliases": [ 00:12:19.959 "b66ed35f-42d7-11ef-9ade-d5fc5159efa5" 00:12:19.959 ], 00:12:19.959 "product_name": "Malloc disk", 00:12:19.959 "block_size": 512, 00:12:19.959 "num_blocks": 65536, 00:12:19.959 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:19.959 "assigned_rate_limits": { 00:12:19.959 "rw_ios_per_sec": 0, 00:12:19.959 "rw_mbytes_per_sec": 0, 00:12:19.959 "r_mbytes_per_sec": 0, 00:12:19.959 "w_mbytes_per_sec": 0 00:12:19.959 }, 00:12:19.959 "claimed": false, 00:12:19.959 "zoned": false, 00:12:19.959 "supported_io_types": { 00:12:19.959 "read": true, 00:12:19.959 "write": true, 00:12:19.959 "unmap": true, 00:12:19.959 "flush": true, 00:12:19.959 "reset": true, 00:12:19.959 "nvme_admin": false, 00:12:19.959 "nvme_io": false, 00:12:19.959 "nvme_io_md": false, 00:12:19.959 "write_zeroes": true, 00:12:19.959 "zcopy": true, 00:12:19.959 "get_zone_info": false, 00:12:19.959 "zone_management": false, 00:12:19.959 "zone_append": false, 00:12:19.959 "compare": false, 00:12:19.959 "compare_and_write": false, 00:12:19.959 "abort": true, 00:12:19.959 "seek_hole": false, 00:12:19.959 "seek_data": false, 00:12:19.959 "copy": true, 00:12:19.959 "nvme_iov_md": false 00:12:19.959 }, 00:12:19.959 "memory_domains": [ 00:12:19.959 { 00:12:19.959 "dma_device_id": "system", 00:12:19.959 "dma_device_type": 1 00:12:19.959 }, 00:12:19.959 { 00:12:19.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.959 "dma_device_type": 2 00:12:19.959 } 00:12:19.959 ], 00:12:19.959 "driver_specific": {} 00:12:19.959 } 00:12:19.959 ] 00:12:19.959 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:19.959 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:19.959 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:19.959 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.217 BaseBdev3 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.476 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.734 [ 00:12:20.734 { 00:12:20.734 "name": "BaseBdev3", 00:12:20.734 "aliases": [ 00:12:20.734 "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5" 00:12:20.734 ], 00:12:20.734 "product_name": "Malloc disk", 00:12:20.734 "block_size": 512, 00:12:20.734 "num_blocks": 65536, 00:12:20.734 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:20.734 "assigned_rate_limits": { 00:12:20.734 "rw_ios_per_sec": 0, 00:12:20.734 "rw_mbytes_per_sec": 0, 00:12:20.734 "r_mbytes_per_sec": 0, 00:12:20.734 "w_mbytes_per_sec": 0 00:12:20.734 }, 00:12:20.734 "claimed": false, 00:12:20.734 "zoned": false, 00:12:20.734 "supported_io_types": { 00:12:20.734 "read": true, 00:12:20.734 "write": true, 00:12:20.734 "unmap": true, 00:12:20.734 "flush": true, 00:12:20.734 "reset": true, 00:12:20.734 "nvme_admin": false, 00:12:20.734 "nvme_io": false, 00:12:20.734 "nvme_io_md": false, 00:12:20.734 "write_zeroes": true, 00:12:20.734 "zcopy": true, 00:12:20.734 "get_zone_info": false, 00:12:20.734 "zone_management": false, 00:12:20.734 "zone_append": false, 00:12:20.734 "compare": false, 00:12:20.734 "compare_and_write": false, 00:12:20.734 "abort": true, 00:12:20.734 "seek_hole": false, 00:12:20.734 "seek_data": false, 00:12:20.734 "copy": true, 00:12:20.734 "nvme_iov_md": false 00:12:20.734 }, 00:12:20.734 "memory_domains": [ 00:12:20.734 { 00:12:20.734 "dma_device_id": "system", 00:12:20.734 "dma_device_type": 1 00:12:20.734 }, 00:12:20.734 { 00:12:20.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.734 "dma_device_type": 2 00:12:20.734 } 00:12:20.734 ], 00:12:20.734 "driver_specific": {} 00:12:20.734 } 00:12:20.734 ] 00:12:20.734 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:20.734 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:20.734 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:20.734 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:20.993 [2024-07-15 18:26:13.313283] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.993 [2024-07-15 18:26:13.313337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.993 [2024-07-15 18:26:13.313347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.993 [2024-07-15 18:26:13.313942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.993 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.251 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.251 "name": "Existed_Raid", 00:12:21.251 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:21.251 "strip_size_kb": 0, 00:12:21.251 "state": "configuring", 00:12:21.251 "raid_level": "raid1", 00:12:21.251 "superblock": true, 00:12:21.251 "num_base_bdevs": 3, 00:12:21.251 "num_base_bdevs_discovered": 2, 00:12:21.251 "num_base_bdevs_operational": 3, 00:12:21.251 "base_bdevs_list": [ 00:12:21.251 { 00:12:21.251 "name": "BaseBdev1", 00:12:21.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.251 "is_configured": false, 00:12:21.251 "data_offset": 0, 00:12:21.251 "data_size": 0 00:12:21.251 }, 00:12:21.251 { 00:12:21.251 "name": "BaseBdev2", 00:12:21.251 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:21.251 "is_configured": true, 00:12:21.251 "data_offset": 2048, 00:12:21.251 "data_size": 63488 00:12:21.251 }, 00:12:21.251 { 00:12:21.251 "name": "BaseBdev3", 00:12:21.251 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:21.251 "is_configured": true, 00:12:21.251 "data_offset": 2048, 00:12:21.251 "data_size": 63488 00:12:21.251 } 00:12:21.251 ] 00:12:21.251 }' 00:12:21.251 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.251 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.816 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:21.816 [2024-07-15 18:26:14.181278] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.073 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.330 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.330 "name": "Existed_Raid", 00:12:22.330 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:22.330 "strip_size_kb": 0, 00:12:22.330 "state": "configuring", 00:12:22.330 "raid_level": "raid1", 00:12:22.330 "superblock": true, 00:12:22.330 "num_base_bdevs": 3, 00:12:22.330 "num_base_bdevs_discovered": 1, 00:12:22.330 "num_base_bdevs_operational": 3, 00:12:22.330 "base_bdevs_list": [ 00:12:22.330 { 00:12:22.330 "name": "BaseBdev1", 00:12:22.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.330 "is_configured": false, 00:12:22.330 "data_offset": 0, 00:12:22.330 "data_size": 0 00:12:22.330 }, 00:12:22.330 { 00:12:22.330 "name": null, 00:12:22.330 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:22.330 "is_configured": false, 00:12:22.330 "data_offset": 2048, 00:12:22.330 "data_size": 63488 00:12:22.330 }, 00:12:22.330 { 00:12:22.330 "name": "BaseBdev3", 00:12:22.330 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:22.330 "is_configured": true, 00:12:22.330 "data_offset": 2048, 00:12:22.330 "data_size": 63488 00:12:22.330 } 00:12:22.330 ] 00:12:22.330 }' 00:12:22.330 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.330 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.871 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:22.871 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.130 [2024-07-15 18:26:15.369449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.130 BaseBdev1 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:23.130 18:26:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:23.388 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.648 [ 00:12:23.648 { 00:12:23.648 "name": "BaseBdev1", 00:12:23.648 "aliases": [ 00:12:23.648 "b891179f-42d7-11ef-9ade-d5fc5159efa5" 00:12:23.648 ], 00:12:23.648 "product_name": "Malloc disk", 00:12:23.648 "block_size": 512, 00:12:23.648 "num_blocks": 65536, 00:12:23.648 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:23.648 "assigned_rate_limits": { 00:12:23.648 "rw_ios_per_sec": 0, 00:12:23.648 "rw_mbytes_per_sec": 0, 00:12:23.648 "r_mbytes_per_sec": 0, 00:12:23.648 "w_mbytes_per_sec": 0 00:12:23.648 }, 00:12:23.648 "claimed": true, 00:12:23.648 "claim_type": "exclusive_write", 00:12:23.648 "zoned": false, 00:12:23.648 "supported_io_types": { 00:12:23.648 "read": true, 00:12:23.648 "write": true, 00:12:23.648 "unmap": true, 00:12:23.648 "flush": true, 00:12:23.648 "reset": true, 00:12:23.648 "nvme_admin": false, 00:12:23.648 "nvme_io": false, 00:12:23.648 "nvme_io_md": false, 00:12:23.648 "write_zeroes": true, 00:12:23.648 "zcopy": true, 00:12:23.648 "get_zone_info": false, 00:12:23.648 "zone_management": false, 00:12:23.648 "zone_append": false, 00:12:23.648 "compare": false, 00:12:23.648 "compare_and_write": false, 00:12:23.648 "abort": true, 00:12:23.648 "seek_hole": false, 00:12:23.648 "seek_data": false, 00:12:23.648 "copy": true, 00:12:23.648 "nvme_iov_md": false 00:12:23.648 }, 00:12:23.648 "memory_domains": [ 00:12:23.648 { 00:12:23.648 "dma_device_id": "system", 00:12:23.648 "dma_device_type": 1 00:12:23.648 }, 00:12:23.648 { 00:12:23.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.648 "dma_device_type": 2 00:12:23.648 } 00:12:23.648 ], 00:12:23.648 "driver_specific": {} 00:12:23.648 } 00:12:23.648 ] 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.648 18:26:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.906 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.906 "name": "Existed_Raid", 00:12:23.906 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:23.906 "strip_size_kb": 0, 00:12:23.906 "state": "configuring", 00:12:23.906 "raid_level": "raid1", 00:12:23.906 "superblock": true, 00:12:23.906 "num_base_bdevs": 3, 00:12:23.906 "num_base_bdevs_discovered": 2, 00:12:23.906 "num_base_bdevs_operational": 3, 00:12:23.906 "base_bdevs_list": [ 00:12:23.906 { 00:12:23.906 "name": "BaseBdev1", 00:12:23.906 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:23.906 "is_configured": true, 00:12:23.906 "data_offset": 2048, 00:12:23.906 "data_size": 63488 00:12:23.906 }, 00:12:23.906 { 00:12:23.906 "name": null, 00:12:23.906 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:23.906 "is_configured": false, 00:12:23.906 "data_offset": 2048, 00:12:23.906 "data_size": 63488 00:12:23.906 }, 00:12:23.906 { 00:12:23.906 "name": "BaseBdev3", 00:12:23.906 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:23.906 "is_configured": true, 00:12:23.906 "data_offset": 2048, 00:12:23.906 "data_size": 63488 00:12:23.906 } 00:12:23.906 ] 00:12:23.906 }' 00:12:23.906 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.906 18:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.164 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.164 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.423 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:24.423 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:24.681 [2024-07-15 18:26:16.985311] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.681 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.939 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.939 "name": "Existed_Raid", 00:12:24.939 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:24.939 "strip_size_kb": 0, 00:12:24.939 "state": "configuring", 00:12:24.939 "raid_level": "raid1", 00:12:24.939 "superblock": true, 00:12:24.939 "num_base_bdevs": 3, 00:12:24.939 "num_base_bdevs_discovered": 1, 00:12:24.939 "num_base_bdevs_operational": 3, 00:12:24.939 "base_bdevs_list": [ 00:12:24.939 { 00:12:24.939 "name": "BaseBdev1", 00:12:24.939 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:24.939 "is_configured": true, 00:12:24.939 "data_offset": 2048, 00:12:24.939 "data_size": 63488 00:12:24.939 }, 00:12:24.939 { 00:12:24.939 "name": null, 00:12:24.939 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:24.939 "is_configured": false, 00:12:24.939 "data_offset": 2048, 00:12:24.939 "data_size": 63488 00:12:24.939 }, 00:12:24.939 { 00:12:24.939 "name": null, 00:12:24.939 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:24.939 "is_configured": false, 00:12:24.939 "data_offset": 2048, 00:12:24.939 "data_size": 63488 00:12:24.939 } 00:12:24.939 ] 00:12:24.939 }' 00:12:24.939 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.939 18:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.504 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.504 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.504 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:25.504 18:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:25.762 [2024-07-15 18:26:18.045327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
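Here and throughout this section, verify_raid_bdev_state expands to the same few steps: dump all raid bdevs over the RPC socket, select the array by name with jq, then assert on its state and base-bdev counts. A minimal standalone sketch of that pattern, assuming the same rpc.py socket path used throughout this run; the rpc wrapper and variable handling are illustrative, not the exact bdev_raid.sh source:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # Fetch every raid bdev and keep only the array under test.
    raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Assert on the fields the trace checks: state, raid_level, strip_size_kb,
    # num_base_bdevs and num_base_bdevs_operational.
    [[ $(jq -r .state <<< "$raid_bdev_info") == online ]] || exit 1
    [[ $(jq -r .raid_level <<< "$raid_bdev_info") == raid1 ]] || exit 1
    [[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") -eq 3 ]] || exit 1
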
00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.762 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.019 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.019 "name": "Existed_Raid", 00:12:26.019 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:26.019 "strip_size_kb": 0, 00:12:26.019 "state": "configuring", 00:12:26.019 "raid_level": "raid1", 00:12:26.019 "superblock": true, 00:12:26.019 "num_base_bdevs": 3, 00:12:26.019 "num_base_bdevs_discovered": 2, 00:12:26.019 "num_base_bdevs_operational": 3, 00:12:26.019 "base_bdevs_list": [ 00:12:26.019 { 00:12:26.019 "name": "BaseBdev1", 00:12:26.019 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:26.019 "is_configured": true, 00:12:26.019 "data_offset": 2048, 00:12:26.019 "data_size": 63488 00:12:26.019 }, 00:12:26.019 { 00:12:26.019 "name": null, 00:12:26.019 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:26.019 "is_configured": false, 00:12:26.019 "data_offset": 2048, 00:12:26.019 "data_size": 63488 00:12:26.019 }, 00:12:26.019 { 00:12:26.019 "name": "BaseBdev3", 00:12:26.019 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:26.020 "is_configured": true, 00:12:26.020 "data_offset": 2048, 00:12:26.020 "data_size": 63488 00:12:26.020 } 00:12:26.020 ] 00:12:26.020 }' 00:12:26.020 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.020 18:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.585 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.585 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:26.585 18:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:26.842 [2024-07-15 18:26:19.173357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.842 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.099 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.099 "name": "Existed_Raid", 00:12:27.099 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:27.099 "strip_size_kb": 0, 00:12:27.099 "state": "configuring", 00:12:27.099 "raid_level": "raid1", 00:12:27.099 "superblock": true, 00:12:27.099 "num_base_bdevs": 3, 00:12:27.099 "num_base_bdevs_discovered": 1, 00:12:27.099 "num_base_bdevs_operational": 3, 00:12:27.099 "base_bdevs_list": [ 00:12:27.099 { 00:12:27.099 "name": null, 00:12:27.099 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:27.099 "is_configured": false, 00:12:27.099 "data_offset": 2048, 00:12:27.099 "data_size": 63488 00:12:27.099 }, 00:12:27.099 { 00:12:27.099 "name": null, 00:12:27.099 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:27.099 "is_configured": false, 00:12:27.099 "data_offset": 2048, 00:12:27.099 "data_size": 63488 00:12:27.099 }, 00:12:27.099 { 00:12:27.099 "name": "BaseBdev3", 00:12:27.099 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:27.099 "is_configured": true, 00:12:27.099 "data_offset": 2048, 00:12:27.099 "data_size": 63488 00:12:27.099 } 00:12:27.099 ] 00:12:27.099 }' 00:12:27.099 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.099 18:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.664 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.664 18:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.664 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:27.664 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:27.921 [2024-07-15 18:26:20.277563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.921 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.922 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.922 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.922 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.180 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.180 "name": "Existed_Raid", 00:12:28.180 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:28.180 "strip_size_kb": 0, 00:12:28.180 "state": "configuring", 00:12:28.180 "raid_level": "raid1", 00:12:28.180 "superblock": true, 00:12:28.180 "num_base_bdevs": 3, 00:12:28.180 "num_base_bdevs_discovered": 2, 00:12:28.180 "num_base_bdevs_operational": 3, 00:12:28.180 "base_bdevs_list": [ 00:12:28.180 { 00:12:28.180 "name": null, 00:12:28.180 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:28.180 "is_configured": false, 00:12:28.180 "data_offset": 2048, 00:12:28.180 "data_size": 63488 00:12:28.180 }, 00:12:28.180 { 00:12:28.180 "name": "BaseBdev2", 00:12:28.180 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:28.180 "is_configured": true, 00:12:28.180 "data_offset": 2048, 00:12:28.180 "data_size": 63488 00:12:28.180 }, 00:12:28.180 { 00:12:28.180 "name": "BaseBdev3", 00:12:28.180 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:28.180 "is_configured": true, 00:12:28.180 "data_offset": 2048, 00:12:28.180 "data_size": 63488 00:12:28.180 } 00:12:28.180 ] 00:12:28.180 }' 00:12:28.180 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.180 18:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.747 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.747 18:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.747 18:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:28.747 18:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.747 18:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:29.005 18:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b891179f-42d7-11ef-9ade-d5fc5159efa5 00:12:29.263 [2024-07-15 18:26:21.589762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:29.263 [2024-07-15 18:26:21.589818] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e2c7ba34f00 00:12:29.263 [2024-07-15 18:26:21.589824] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.263 [2024-07-15 18:26:21.589845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e2c7ba97e20 00:12:29.263 [2024-07-15 18:26:21.589898] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e2c7ba34f00 00:12:29.263 [2024-07-15 18:26:21.589902] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3e2c7ba34f00 00:12:29.263 [2024-07-15 18:26:21.589923] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.263 NewBaseBdev 00:12:29.263 18:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:29.263 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:29.263 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:29.264 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:29.264 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:29.264 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:29.264 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:29.522 18:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:29.781 [ 00:12:29.781 { 00:12:29.781 "name": "NewBaseBdev", 00:12:29.781 "aliases": [ 00:12:29.781 "b891179f-42d7-11ef-9ade-d5fc5159efa5" 00:12:29.781 ], 00:12:29.781 "product_name": "Malloc disk", 00:12:29.781 "block_size": 512, 00:12:29.781 "num_blocks": 65536, 00:12:29.781 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:29.781 "assigned_rate_limits": { 00:12:29.781 "rw_ios_per_sec": 0, 00:12:29.781 "rw_mbytes_per_sec": 0, 00:12:29.781 "r_mbytes_per_sec": 0, 00:12:29.781 "w_mbytes_per_sec": 0 00:12:29.781 }, 00:12:29.781 "claimed": true, 00:12:29.781 "claim_type": "exclusive_write", 00:12:29.781 "zoned": false, 00:12:29.781 "supported_io_types": { 00:12:29.781 "read": true, 00:12:29.781 "write": true, 00:12:29.781 "unmap": true, 00:12:29.781 "flush": true, 00:12:29.781 "reset": true, 00:12:29.781 "nvme_admin": false, 00:12:29.781 "nvme_io": false, 00:12:29.781 "nvme_io_md": false, 00:12:29.781 "write_zeroes": true, 00:12:29.781 "zcopy": true, 00:12:29.781 "get_zone_info": false, 00:12:29.781 "zone_management": false, 00:12:29.781 "zone_append": false, 00:12:29.781 "compare": false, 00:12:29.781 "compare_and_write": false, 00:12:29.781 "abort": true, 00:12:29.781 "seek_hole": false, 00:12:29.781 "seek_data": false, 00:12:29.781 "copy": true, 00:12:29.781 "nvme_iov_md": false 00:12:29.781 }, 00:12:29.781 "memory_domains": [ 00:12:29.781 { 00:12:29.781 "dma_device_id": "system", 00:12:29.781 "dma_device_type": 1 00:12:29.781 }, 00:12:29.781 { 00:12:29.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.781 "dma_device_type": 2 00:12:29.781 } 00:12:29.781 ], 00:12:29.781 "driver_specific": {} 00:12:29.781 } 00:12:29.781 ] 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:29.781 18:26:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.781 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.040 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.040 "name": "Existed_Raid", 00:12:30.040 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.040 "strip_size_kb": 0, 00:12:30.040 "state": "online", 00:12:30.040 "raid_level": "raid1", 00:12:30.040 "superblock": true, 00:12:30.040 "num_base_bdevs": 3, 00:12:30.040 "num_base_bdevs_discovered": 3, 00:12:30.040 "num_base_bdevs_operational": 3, 00:12:30.040 "base_bdevs_list": [ 00:12:30.040 { 00:12:30.040 "name": "NewBaseBdev", 00:12:30.040 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.040 "is_configured": true, 00:12:30.040 "data_offset": 2048, 00:12:30.040 "data_size": 63488 00:12:30.040 }, 00:12:30.040 { 00:12:30.040 "name": "BaseBdev2", 00:12:30.040 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.040 "is_configured": true, 00:12:30.040 "data_offset": 2048, 00:12:30.040 "data_size": 63488 00:12:30.040 }, 00:12:30.040 { 00:12:30.040 "name": "BaseBdev3", 00:12:30.040 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.041 "is_configured": true, 00:12:30.041 "data_offset": 2048, 00:12:30.041 "data_size": 63488 00:12:30.041 } 00:12:30.041 ] 00:12:30.041 }' 00:12:30.041 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.041 18:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:30.608 18:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:30.866 [2024-07-15 18:26:22.997770] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:30.867 "name": "Existed_Raid", 00:12:30.867 "aliases": [ 00:12:30.867 "b7575dd5-42d7-11ef-9ade-d5fc5159efa5" 00:12:30.867 ], 00:12:30.867 "product_name": "Raid Volume", 00:12:30.867 "block_size": 512, 00:12:30.867 "num_blocks": 63488, 00:12:30.867 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.867 "assigned_rate_limits": { 00:12:30.867 "rw_ios_per_sec": 0, 00:12:30.867 "rw_mbytes_per_sec": 0, 00:12:30.867 "r_mbytes_per_sec": 0, 00:12:30.867 "w_mbytes_per_sec": 0 00:12:30.867 }, 00:12:30.867 "claimed": false, 00:12:30.867 "zoned": false, 00:12:30.867 "supported_io_types": { 00:12:30.867 "read": true, 00:12:30.867 "write": true, 00:12:30.867 "unmap": false, 00:12:30.867 "flush": false, 00:12:30.867 "reset": true, 00:12:30.867 "nvme_admin": false, 00:12:30.867 "nvme_io": false, 00:12:30.867 "nvme_io_md": false, 00:12:30.867 "write_zeroes": true, 00:12:30.867 "zcopy": false, 00:12:30.867 "get_zone_info": false, 00:12:30.867 "zone_management": false, 00:12:30.867 "zone_append": false, 00:12:30.867 "compare": false, 00:12:30.867 "compare_and_write": false, 00:12:30.867 "abort": false, 00:12:30.867 "seek_hole": false, 00:12:30.867 "seek_data": false, 00:12:30.867 "copy": false, 00:12:30.867 "nvme_iov_md": false 00:12:30.867 }, 00:12:30.867 "memory_domains": [ 00:12:30.867 { 00:12:30.867 "dma_device_id": "system", 00:12:30.867 "dma_device_type": 1 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.867 "dma_device_type": 2 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "dma_device_id": "system", 00:12:30.867 "dma_device_type": 1 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.867 "dma_device_type": 2 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "dma_device_id": "system", 00:12:30.867 "dma_device_type": 1 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.867 "dma_device_type": 2 00:12:30.867 } 00:12:30.867 ], 00:12:30.867 "driver_specific": { 00:12:30.867 "raid": { 00:12:30.867 "uuid": "b7575dd5-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.867 "strip_size_kb": 0, 00:12:30.867 "state": "online", 00:12:30.867 "raid_level": "raid1", 00:12:30.867 "superblock": true, 00:12:30.867 "num_base_bdevs": 3, 00:12:30.867 "num_base_bdevs_discovered": 3, 00:12:30.867 "num_base_bdevs_operational": 3, 00:12:30.867 "base_bdevs_list": [ 00:12:30.867 { 00:12:30.867 "name": "NewBaseBdev", 00:12:30.867 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.867 "is_configured": true, 00:12:30.867 "data_offset": 2048, 00:12:30.867 "data_size": 63488 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "name": "BaseBdev2", 00:12:30.867 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.867 "is_configured": true, 00:12:30.867 "data_offset": 2048, 00:12:30.867 "data_size": 63488 00:12:30.867 }, 00:12:30.867 { 00:12:30.867 "name": "BaseBdev3", 00:12:30.867 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:30.867 "is_configured": true, 00:12:30.867 "data_offset": 2048, 00:12:30.867 "data_size": 63488 00:12:30.867 } 00:12:30.867 ] 00:12:30.867 } 00:12:30.867 } 00:12:30.867 }' 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:30.867 BaseBdev2 00:12:30.867 BaseBdev3' 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:30.867 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:31.125 "name": "NewBaseBdev", 00:12:31.125 "aliases": [ 00:12:31.125 "b891179f-42d7-11ef-9ade-d5fc5159efa5" 00:12:31.125 ], 00:12:31.125 "product_name": "Malloc disk", 00:12:31.125 "block_size": 512, 00:12:31.125 "num_blocks": 65536, 00:12:31.125 "uuid": "b891179f-42d7-11ef-9ade-d5fc5159efa5", 00:12:31.125 "assigned_rate_limits": { 00:12:31.125 "rw_ios_per_sec": 0, 00:12:31.125 "rw_mbytes_per_sec": 0, 00:12:31.125 "r_mbytes_per_sec": 0, 00:12:31.125 "w_mbytes_per_sec": 0 00:12:31.125 }, 00:12:31.125 "claimed": true, 00:12:31.125 "claim_type": "exclusive_write", 00:12:31.125 "zoned": false, 00:12:31.125 "supported_io_types": { 00:12:31.125 "read": true, 00:12:31.125 "write": true, 00:12:31.125 "unmap": true, 00:12:31.125 "flush": true, 00:12:31.125 "reset": true, 00:12:31.125 "nvme_admin": false, 00:12:31.125 "nvme_io": false, 00:12:31.125 "nvme_io_md": false, 00:12:31.125 "write_zeroes": true, 00:12:31.125 "zcopy": true, 00:12:31.125 "get_zone_info": false, 00:12:31.125 "zone_management": false, 00:12:31.125 "zone_append": false, 00:12:31.125 "compare": false, 00:12:31.125 "compare_and_write": false, 00:12:31.125 "abort": true, 00:12:31.125 "seek_hole": false, 00:12:31.125 "seek_data": false, 00:12:31.125 "copy": true, 00:12:31.125 "nvme_iov_md": false 00:12:31.125 }, 00:12:31.125 "memory_domains": [ 00:12:31.125 { 00:12:31.125 "dma_device_id": "system", 00:12:31.125 "dma_device_type": 1 00:12:31.125 }, 00:12:31.125 { 00:12:31.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.125 "dma_device_type": 2 00:12:31.125 } 00:12:31.125 ], 00:12:31.125 "driver_specific": {} 00:12:31.125 }' 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
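
The property verification traced above reduces to a simple pattern: fetch the raid bdev, list its configured base bdevs, and check that each base bdev agrees with the array on a handful of layout fields. A minimal bash sketch reconstructed from the trace, assuming the rpc.py socket path used in this run:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Pull the raid bdev and the names of its configured base bdevs.
    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                   | select(.is_configured == true).name' <<< "$raid_info")
    # Each base bdev must agree with the raid bdev on these four fields
    # (512 == 512 for block_size, null == null for the metadata fields above).
    for name in $names; do
        base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        for field in block_size md_size md_interleave dif_type; do
            [[ $(jq ".$field" <<< "$raid_info") == $(jq ".$field" <<< "$base_info") ]]
        done
    done
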
00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:31.125 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:31.383 "name": "BaseBdev2", 00:12:31.383 "aliases": [ 00:12:31.383 "b66ed35f-42d7-11ef-9ade-d5fc5159efa5" 00:12:31.383 ], 00:12:31.383 "product_name": "Malloc disk", 00:12:31.383 "block_size": 512, 00:12:31.383 "num_blocks": 65536, 00:12:31.383 "uuid": "b66ed35f-42d7-11ef-9ade-d5fc5159efa5", 00:12:31.383 "assigned_rate_limits": { 00:12:31.383 "rw_ios_per_sec": 0, 00:12:31.383 "rw_mbytes_per_sec": 0, 00:12:31.383 "r_mbytes_per_sec": 0, 00:12:31.383 "w_mbytes_per_sec": 0 00:12:31.383 }, 00:12:31.383 "claimed": true, 00:12:31.383 "claim_type": "exclusive_write", 00:12:31.383 "zoned": false, 00:12:31.383 "supported_io_types": { 00:12:31.383 "read": true, 00:12:31.383 "write": true, 00:12:31.383 "unmap": true, 00:12:31.383 "flush": true, 00:12:31.383 "reset": true, 00:12:31.383 "nvme_admin": false, 00:12:31.383 "nvme_io": false, 00:12:31.383 "nvme_io_md": false, 00:12:31.383 "write_zeroes": true, 00:12:31.383 "zcopy": true, 00:12:31.383 "get_zone_info": false, 00:12:31.383 "zone_management": false, 00:12:31.383 "zone_append": false, 00:12:31.383 "compare": false, 00:12:31.383 "compare_and_write": false, 00:12:31.383 "abort": true, 00:12:31.383 "seek_hole": false, 00:12:31.383 "seek_data": false, 00:12:31.383 "copy": true, 00:12:31.383 "nvme_iov_md": false 00:12:31.383 }, 00:12:31.383 "memory_domains": [ 00:12:31.383 { 00:12:31.383 "dma_device_id": "system", 00:12:31.383 "dma_device_type": 1 00:12:31.383 }, 00:12:31.383 { 00:12:31.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.383 "dma_device_type": 2 00:12:31.383 } 00:12:31.383 ], 00:12:31.383 "driver_specific": {} 00:12:31.383 }' 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:31.383 18:26:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:31.383 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:31.642 "name": "BaseBdev3", 00:12:31.642 "aliases": [ 00:12:31.642 "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5" 00:12:31.642 ], 00:12:31.642 "product_name": "Malloc disk", 00:12:31.642 "block_size": 512, 00:12:31.642 "num_blocks": 65536, 00:12:31.642 "uuid": "b6e8e4f5-42d7-11ef-9ade-d5fc5159efa5", 00:12:31.642 "assigned_rate_limits": { 00:12:31.642 "rw_ios_per_sec": 0, 00:12:31.642 "rw_mbytes_per_sec": 0, 00:12:31.642 "r_mbytes_per_sec": 0, 00:12:31.642 "w_mbytes_per_sec": 0 00:12:31.642 }, 00:12:31.642 "claimed": true, 00:12:31.642 "claim_type": "exclusive_write", 00:12:31.642 "zoned": false, 00:12:31.642 "supported_io_types": { 00:12:31.642 "read": true, 00:12:31.642 "write": true, 00:12:31.642 "unmap": true, 00:12:31.642 "flush": true, 00:12:31.642 "reset": true, 00:12:31.642 "nvme_admin": false, 00:12:31.642 "nvme_io": false, 00:12:31.642 "nvme_io_md": false, 00:12:31.642 "write_zeroes": true, 00:12:31.642 "zcopy": true, 00:12:31.642 "get_zone_info": false, 00:12:31.642 "zone_management": false, 00:12:31.642 "zone_append": false, 00:12:31.642 "compare": false, 00:12:31.642 "compare_and_write": false, 00:12:31.642 "abort": true, 00:12:31.642 "seek_hole": false, 00:12:31.642 "seek_data": false, 00:12:31.642 "copy": true, 00:12:31.642 "nvme_iov_md": false 00:12:31.642 }, 00:12:31.642 "memory_domains": [ 00:12:31.642 { 00:12:31.642 "dma_device_id": "system", 00:12:31.642 "dma_device_type": 1 00:12:31.642 }, 00:12:31.642 { 00:12:31.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.642 "dma_device_type": 2 00:12:31.642 } 00:12:31.642 ], 00:12:31.642 "driver_specific": {} 00:12:31.642 }' 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:31.642 18:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:12:31.901 [2024-07-15 18:26:24.249831] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.902 [2024-07-15 18:26:24.249857] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.902 [2024-07-15 18:26:24.249880] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.902 [2024-07-15 18:26:24.249958] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.902 [2024-07-15 18:26:24.249973] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2c7ba34f00 name Existed_Raid, state offline 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56857 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56857 ']' 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56857 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56857 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:31.902 killing process with pid 56857 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56857' 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56857 00:12:31.902 [2024-07-15 18:26:24.275912] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.902 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56857 00:12:32.161 [2024-07-15 18:26:24.299080] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.161 18:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:32.161 00:12:32.161 real 0m24.657s 00:12:32.161 user 0m45.133s 00:12:32.161 sys 0m3.286s 00:12:32.161 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.161 18:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.161 ************************************ 00:12:32.161 END TEST raid_state_function_test_sb 00:12:32.161 ************************************ 00:12:32.420 18:26:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:32.420 18:26:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:32.420 18:26:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:32.420 18:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.420 18:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.420 ************************************ 00:12:32.420 START TEST raid_superblock_test 00:12:32.420 ************************************ 00:12:32.420 18:26:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57585 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57585 /var/tmp/spdk-raid.sock 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57585 ']' 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.420 18:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.420 [2024-07-15 18:26:24.584836] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
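
The harness startup traced here follows the usual autotest pattern: launch bdev_svc with a private RPC socket, record its pid, and block until the socket answers. A sketch under the paths from this run, using the waitforlisten/killprocess helpers from autotest_common.sh (the EXIT trap is the customary cleanup, not shown verbatim in the trace):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Block until the app is listening on its UNIX-domain RPC socket.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # Tear the app down when the test exits.
    trap 'killprocess "$raid_pid"' EXIT
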
00:12:32.420 [2024-07-15 18:26:24.585002] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:32.988 EAL: TSC is not safe to use in SMP mode 00:12:32.988 EAL: TSC is not invariant 00:12:32.988 [2024-07-15 18:26:25.185885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.988 [2024-07-15 18:26:25.293962] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:32.988 [2024-07-15 18:26:25.296196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.988 [2024-07-15 18:26:25.297036] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.988 [2024-07-15 18:26:25.297054] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:33.559 malloc1 00:12:33.559 18:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.845 [2024-07-15 18:26:26.177486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.845 [2024-07-15 18:26:26.177550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.845 [2024-07-15 18:26:26.177564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034780 00:12:33.845 [2024-07-15 18:26:26.177573] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.845 [2024-07-15 18:26:26.178540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.845 [2024-07-15 18:26:26.178569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.845 pt1 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:33.845 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:34.412 malloc2 00:12:34.412 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.671 [2024-07-15 18:26:26.813538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.671 [2024-07-15 18:26:26.813605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.671 [2024-07-15 18:26:26.813618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034c80 00:12:34.671 [2024-07-15 18:26:26.813627] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.671 [2024-07-15 18:26:26.814339] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.671 [2024-07-15 18:26:26.814368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.671 pt2 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.671 18:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:34.929 malloc3 00:12:34.929 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:35.188 [2024-07-15 18:26:27.353581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:35.188 [2024-07-15 18:26:27.353647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.188 [2024-07-15 18:26:27.353661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d035180 00:12:35.188 [2024-07-15 18:26:27.353670] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.188 [2024-07-15 18:26:27.354394] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.188 [2024-07-15 18:26:27.354424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:35.188 pt3 00:12:35.188 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:35.188 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:35.188 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:35.447 [2024-07-15 18:26:27.601604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:35.447 [2024-07-15 18:26:27.602221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.447 [2024-07-15 18:26:27.602245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:35.447 [2024-07-15 18:26:27.602299] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3388d035400 00:12:35.447 [2024-07-15 18:26:27.602306] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.447 [2024-07-15 18:26:27.602341] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3388d097e20 00:12:35.447 [2024-07-15 18:26:27.602433] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3388d035400 00:12:35.447 [2024-07-15 18:26:27.602439] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3388d035400 00:12:35.447 [2024-07-15 18:26:27.602469] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.447 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.705 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.705 "name": "raid_bdev1", 00:12:35.705 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:35.705 "strip_size_kb": 0, 00:12:35.705 "state": "online", 00:12:35.705 "raid_level": "raid1", 00:12:35.705 "superblock": true, 00:12:35.705 "num_base_bdevs": 3, 00:12:35.705 
"num_base_bdevs_discovered": 3, 00:12:35.705 "num_base_bdevs_operational": 3, 00:12:35.705 "base_bdevs_list": [ 00:12:35.705 { 00:12:35.705 "name": "pt1", 00:12:35.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.705 "is_configured": true, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 }, 00:12:35.705 { 00:12:35.705 "name": "pt2", 00:12:35.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.705 "is_configured": true, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 }, 00:12:35.705 { 00:12:35.705 "name": "pt3", 00:12:35.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.705 "is_configured": true, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 } 00:12:35.706 ] 00:12:35.706 }' 00:12:35.706 18:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.706 18:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:35.964 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:36.223 [2024-07-15 18:26:28.561719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:36.223 "name": "raid_bdev1", 00:12:36.223 "aliases": [ 00:12:36.223 "bfdb972d-42d7-11ef-9ade-d5fc5159efa5" 00:12:36.223 ], 00:12:36.223 "product_name": "Raid Volume", 00:12:36.223 "block_size": 512, 00:12:36.223 "num_blocks": 63488, 00:12:36.223 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:36.223 "assigned_rate_limits": { 00:12:36.223 "rw_ios_per_sec": 0, 00:12:36.223 "rw_mbytes_per_sec": 0, 00:12:36.223 "r_mbytes_per_sec": 0, 00:12:36.223 "w_mbytes_per_sec": 0 00:12:36.223 }, 00:12:36.223 "claimed": false, 00:12:36.223 "zoned": false, 00:12:36.223 "supported_io_types": { 00:12:36.223 "read": true, 00:12:36.223 "write": true, 00:12:36.223 "unmap": false, 00:12:36.223 "flush": false, 00:12:36.223 "reset": true, 00:12:36.223 "nvme_admin": false, 00:12:36.223 "nvme_io": false, 00:12:36.223 "nvme_io_md": false, 00:12:36.223 "write_zeroes": true, 00:12:36.223 "zcopy": false, 00:12:36.223 "get_zone_info": false, 00:12:36.223 "zone_management": false, 00:12:36.223 "zone_append": false, 00:12:36.223 "compare": false, 00:12:36.223 "compare_and_write": false, 00:12:36.223 "abort": false, 00:12:36.223 "seek_hole": false, 00:12:36.223 "seek_data": false, 00:12:36.223 "copy": false, 00:12:36.223 "nvme_iov_md": false 00:12:36.223 }, 00:12:36.223 "memory_domains": [ 00:12:36.223 { 00:12:36.223 "dma_device_id": "system", 00:12:36.223 "dma_device_type": 1 00:12:36.223 }, 00:12:36.223 { 
00:12:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.223 "dma_device_type": 2 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "dma_device_id": "system", 00:12:36.223 "dma_device_type": 1 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.223 "dma_device_type": 2 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "dma_device_id": "system", 00:12:36.223 "dma_device_type": 1 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.223 "dma_device_type": 2 00:12:36.223 } 00:12:36.223 ], 00:12:36.223 "driver_specific": { 00:12:36.223 "raid": { 00:12:36.223 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:36.223 "strip_size_kb": 0, 00:12:36.223 "state": "online", 00:12:36.223 "raid_level": "raid1", 00:12:36.223 "superblock": true, 00:12:36.223 "num_base_bdevs": 3, 00:12:36.223 "num_base_bdevs_discovered": 3, 00:12:36.223 "num_base_bdevs_operational": 3, 00:12:36.223 "base_bdevs_list": [ 00:12:36.223 { 00:12:36.223 "name": "pt1", 00:12:36.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.223 "is_configured": true, 00:12:36.223 "data_offset": 2048, 00:12:36.223 "data_size": 63488 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "name": "pt2", 00:12:36.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.223 "is_configured": true, 00:12:36.223 "data_offset": 2048, 00:12:36.223 "data_size": 63488 00:12:36.223 }, 00:12:36.223 { 00:12:36.223 "name": "pt3", 00:12:36.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.223 "is_configured": true, 00:12:36.223 "data_offset": 2048, 00:12:36.223 "data_size": 63488 00:12:36.223 } 00:12:36.223 ] 00:12:36.223 } 00:12:36.223 } 00:12:36.223 }' 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:36.223 pt2 00:12:36.223 pt3' 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:36.223 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.790 "name": "pt1", 00:12:36.790 "aliases": [ 00:12:36.790 "00000000-0000-0000-0000-000000000001" 00:12:36.790 ], 00:12:36.790 "product_name": "passthru", 00:12:36.790 "block_size": 512, 00:12:36.790 "num_blocks": 65536, 00:12:36.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.790 "assigned_rate_limits": { 00:12:36.790 "rw_ios_per_sec": 0, 00:12:36.790 "rw_mbytes_per_sec": 0, 00:12:36.790 "r_mbytes_per_sec": 0, 00:12:36.790 "w_mbytes_per_sec": 0 00:12:36.790 }, 00:12:36.790 "claimed": true, 00:12:36.790 "claim_type": "exclusive_write", 00:12:36.790 "zoned": false, 00:12:36.790 "supported_io_types": { 00:12:36.790 "read": true, 00:12:36.790 "write": true, 00:12:36.790 "unmap": true, 00:12:36.790 "flush": true, 00:12:36.790 "reset": true, 00:12:36.790 "nvme_admin": false, 00:12:36.790 "nvme_io": false, 00:12:36.790 "nvme_io_md": false, 00:12:36.790 "write_zeroes": true, 00:12:36.790 "zcopy": true, 00:12:36.790 "get_zone_info": false, 00:12:36.790 "zone_management": false, 00:12:36.790 "zone_append": false, 00:12:36.790 
"compare": false, 00:12:36.790 "compare_and_write": false, 00:12:36.790 "abort": true, 00:12:36.790 "seek_hole": false, 00:12:36.790 "seek_data": false, 00:12:36.790 "copy": true, 00:12:36.790 "nvme_iov_md": false 00:12:36.790 }, 00:12:36.790 "memory_domains": [ 00:12:36.790 { 00:12:36.790 "dma_device_id": "system", 00:12:36.790 "dma_device_type": 1 00:12:36.790 }, 00:12:36.790 { 00:12:36.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.790 "dma_device_type": 2 00:12:36.790 } 00:12:36.790 ], 00:12:36.790 "driver_specific": { 00:12:36.790 "passthru": { 00:12:36.790 "name": "pt1", 00:12:36.790 "base_bdev_name": "malloc1" 00:12:36.790 } 00:12:36.790 } 00:12:36.790 }' 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.790 18:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.049 "name": "pt2", 00:12:37.049 "aliases": [ 00:12:37.049 "00000000-0000-0000-0000-000000000002" 00:12:37.049 ], 00:12:37.049 "product_name": "passthru", 00:12:37.049 "block_size": 512, 00:12:37.049 "num_blocks": 65536, 00:12:37.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.049 "assigned_rate_limits": { 00:12:37.049 "rw_ios_per_sec": 0, 00:12:37.049 "rw_mbytes_per_sec": 0, 00:12:37.049 "r_mbytes_per_sec": 0, 00:12:37.049 "w_mbytes_per_sec": 0 00:12:37.049 }, 00:12:37.049 "claimed": true, 00:12:37.049 "claim_type": "exclusive_write", 00:12:37.049 "zoned": false, 00:12:37.049 "supported_io_types": { 00:12:37.049 "read": true, 00:12:37.049 "write": true, 00:12:37.049 "unmap": true, 00:12:37.049 "flush": true, 00:12:37.049 "reset": true, 00:12:37.049 "nvme_admin": false, 00:12:37.049 "nvme_io": false, 00:12:37.049 "nvme_io_md": false, 00:12:37.049 "write_zeroes": true, 00:12:37.049 "zcopy": true, 00:12:37.049 "get_zone_info": false, 00:12:37.049 "zone_management": false, 00:12:37.049 "zone_append": false, 00:12:37.049 "compare": false, 00:12:37.049 "compare_and_write": false, 00:12:37.049 "abort": true, 00:12:37.049 "seek_hole": false, 00:12:37.049 "seek_data": false, 
00:12:37.049 "copy": true, 00:12:37.049 "nvme_iov_md": false 00:12:37.049 }, 00:12:37.049 "memory_domains": [ 00:12:37.049 { 00:12:37.049 "dma_device_id": "system", 00:12:37.049 "dma_device_type": 1 00:12:37.049 }, 00:12:37.049 { 00:12:37.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.049 "dma_device_type": 2 00:12:37.049 } 00:12:37.049 ], 00:12:37.049 "driver_specific": { 00:12:37.049 "passthru": { 00:12:37.049 "name": "pt2", 00:12:37.049 "base_bdev_name": "malloc2" 00:12:37.049 } 00:12:37.049 } 00:12:37.049 }' 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.049 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:37.050 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.309 "name": "pt3", 00:12:37.309 "aliases": [ 00:12:37.309 "00000000-0000-0000-0000-000000000003" 00:12:37.309 ], 00:12:37.309 "product_name": "passthru", 00:12:37.309 "block_size": 512, 00:12:37.309 "num_blocks": 65536, 00:12:37.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.309 "assigned_rate_limits": { 00:12:37.309 "rw_ios_per_sec": 0, 00:12:37.309 "rw_mbytes_per_sec": 0, 00:12:37.309 "r_mbytes_per_sec": 0, 00:12:37.309 "w_mbytes_per_sec": 0 00:12:37.309 }, 00:12:37.309 "claimed": true, 00:12:37.309 "claim_type": "exclusive_write", 00:12:37.309 "zoned": false, 00:12:37.309 "supported_io_types": { 00:12:37.309 "read": true, 00:12:37.309 "write": true, 00:12:37.309 "unmap": true, 00:12:37.309 "flush": true, 00:12:37.309 "reset": true, 00:12:37.309 "nvme_admin": false, 00:12:37.309 "nvme_io": false, 00:12:37.309 "nvme_io_md": false, 00:12:37.309 "write_zeroes": true, 00:12:37.309 "zcopy": true, 00:12:37.309 "get_zone_info": false, 00:12:37.309 "zone_management": false, 00:12:37.309 "zone_append": false, 00:12:37.309 "compare": false, 00:12:37.309 "compare_and_write": false, 00:12:37.309 "abort": true, 00:12:37.309 "seek_hole": false, 00:12:37.309 "seek_data": false, 00:12:37.309 "copy": true, 00:12:37.309 "nvme_iov_md": false 00:12:37.309 }, 00:12:37.309 "memory_domains": [ 00:12:37.309 { 00:12:37.309 "dma_device_id": 
"system", 00:12:37.309 "dma_device_type": 1 00:12:37.309 }, 00:12:37.309 { 00:12:37.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.309 "dma_device_type": 2 00:12:37.309 } 00:12:37.309 ], 00:12:37.309 "driver_specific": { 00:12:37.309 "passthru": { 00:12:37.309 "name": "pt3", 00:12:37.309 "base_bdev_name": "malloc3" 00:12:37.309 } 00:12:37.309 } 00:12:37.309 }' 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:37.309 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:37.567 [2024-07-15 18:26:29.877823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.567 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bfdb972d-42d7-11ef-9ade-d5fc5159efa5 00:12:37.567 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bfdb972d-42d7-11ef-9ade-d5fc5159efa5 ']' 00:12:37.567 18:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:37.826 [2024-07-15 18:26:30.109778] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.826 [2024-07-15 18:26:30.109806] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.826 [2024-07-15 18:26:30.109846] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.826 [2024-07-15 18:26:30.109864] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.826 [2024-07-15 18:26:30.109868] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d035400 name raid_bdev1, state offline 00:12:37.826 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.826 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:38.085 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:38.085 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:12:38.085 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.085 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:38.344 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.344 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:38.603 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.603 18:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:38.862 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:38.862 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:39.121 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.380 [2024-07-15 18:26:31.705930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:39.380 [2024-07-15 18:26:31.706564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:39.380 [2024-07-15 18:26:31.706582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:39.380 
[2024-07-15 18:26:31.706598] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:39.380 [2024-07-15 18:26:31.706636] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:39.380 [2024-07-15 18:26:31.706649] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:39.380 [2024-07-15 18:26:31.706658] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.380 [2024-07-15 18:26:31.706662] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d035180 name raid_bdev1, state configuring 00:12:39.380 request: 00:12:39.380 { 00:12:39.380 "name": "raid_bdev1", 00:12:39.380 "raid_level": "raid1", 00:12:39.380 "base_bdevs": [ 00:12:39.380 "malloc1", 00:12:39.380 "malloc2", 00:12:39.380 "malloc3" 00:12:39.380 ], 00:12:39.380 "superblock": false, 00:12:39.380 "method": "bdev_raid_create", 00:12:39.380 "req_id": 1 00:12:39.380 } 00:12:39.380 Got JSON-RPC error response 00:12:39.380 response: 00:12:39.380 { 00:12:39.380 "code": -17, 00:12:39.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:39.380 } 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:39.380 18:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:39.948 [2024-07-15 18:26:32.301971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:39.948 [2024-07-15 18:26:32.302030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.948 [2024-07-15 18:26:32.302043] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034c80 00:12:39.948 [2024-07-15 18:26:32.302051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.948 [2024-07-15 18:26:32.302748] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.948 [2024-07-15 18:26:32.302774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:39.948 [2024-07-15 18:26:32.302800] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:39.948 [2024-07-15 18:26:32.302812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:39.948 pt1 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:39.948 
18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.948 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.515 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.515 "name": "raid_bdev1", 00:12:40.515 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:40.515 "strip_size_kb": 0, 00:12:40.515 "state": "configuring", 00:12:40.515 "raid_level": "raid1", 00:12:40.515 "superblock": true, 00:12:40.515 "num_base_bdevs": 3, 00:12:40.515 "num_base_bdevs_discovered": 1, 00:12:40.515 "num_base_bdevs_operational": 3, 00:12:40.515 "base_bdevs_list": [ 00:12:40.515 { 00:12:40.515 "name": "pt1", 00:12:40.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.515 "is_configured": true, 00:12:40.515 "data_offset": 2048, 00:12:40.515 "data_size": 63488 00:12:40.515 }, 00:12:40.515 { 00:12:40.515 "name": null, 00:12:40.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.515 "is_configured": false, 00:12:40.515 "data_offset": 2048, 00:12:40.515 "data_size": 63488 00:12:40.515 }, 00:12:40.515 { 00:12:40.515 "name": null, 00:12:40.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.515 "is_configured": false, 00:12:40.515 "data_offset": 2048, 00:12:40.515 "data_size": 63488 00:12:40.515 } 00:12:40.515 ] 00:12:40.515 }' 00:12:40.515 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.516 18:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.775 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:40.775 18:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.036 [2024-07-15 18:26:33.266058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.036 [2024-07-15 18:26:33.266136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.036 [2024-07-15 18:26:33.266157] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d035680 00:12:41.036 [2024-07-15 18:26:33.266171] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.036 [2024-07-15 18:26:33.266322] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:41.036 [2024-07-15 18:26:33.266339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.036 [2024-07-15 18:26:33.266372] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:41.036 [2024-07-15 18:26:33.266383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.036 pt2 00:12:41.036 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:41.299 [2024-07-15 18:26:33.574083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.299 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.557 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.557 "name": "raid_bdev1", 00:12:41.557 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:41.557 "strip_size_kb": 0, 00:12:41.557 "state": "configuring", 00:12:41.557 "raid_level": "raid1", 00:12:41.557 "superblock": true, 00:12:41.557 "num_base_bdevs": 3, 00:12:41.557 "num_base_bdevs_discovered": 1, 00:12:41.557 "num_base_bdevs_operational": 3, 00:12:41.557 "base_bdevs_list": [ 00:12:41.557 { 00:12:41.557 "name": "pt1", 00:12:41.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.557 "is_configured": true, 00:12:41.557 "data_offset": 2048, 00:12:41.557 "data_size": 63488 00:12:41.557 }, 00:12:41.557 { 00:12:41.557 "name": null, 00:12:41.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.557 "is_configured": false, 00:12:41.557 "data_offset": 2048, 00:12:41.557 "data_size": 63488 00:12:41.557 }, 00:12:41.557 { 00:12:41.557 "name": null, 00:12:41.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.557 "is_configured": false, 00:12:41.557 "data_offset": 2048, 00:12:41.557 "data_size": 63488 00:12:41.557 } 00:12:41.557 ] 00:12:41.557 }' 00:12:41.557 18:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.557 18:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.125 18:26:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:42.125 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:42.125 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.384 [2024-07-15 18:26:34.534150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.384 [2024-07-15 18:26:34.534226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.384 [2024-07-15 18:26:34.534256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d035680 00:12:42.384 [2024-07-15 18:26:34.534265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.384 [2024-07-15 18:26:34.534391] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.384 [2024-07-15 18:26:34.534403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.384 [2024-07-15 18:26:34.534428] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:42.384 [2024-07-15 18:26:34.534436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.384 pt2 00:12:42.384 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:42.384 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:42.384 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.643 [2024-07-15 18:26:34.834175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.643 [2024-07-15 18:26:34.834231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.643 [2024-07-15 18:26:34.834243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d035400 00:12:42.643 [2024-07-15 18:26:34.834252] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.643 [2024-07-15 18:26:34.834393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.643 [2024-07-15 18:26:34.834405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.643 [2024-07-15 18:26:34.834429] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:42.643 [2024-07-15 18:26:34.834438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.643 [2024-07-15 18:26:34.834476] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3388d034780 00:12:42.643 [2024-07-15 18:26:34.834480] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.643 [2024-07-15 18:26:34.834502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3388d097e20 00:12:42.643 [2024-07-15 18:26:34.834559] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3388d034780 00:12:42.643 [2024-07-15 18:26:34.834564] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3388d034780 00:12:42.643 [2024-07-15 18:26:34.834585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.643 
pt3 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.643 18:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.909 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.909 "name": "raid_bdev1", 00:12:42.909 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:42.909 "strip_size_kb": 0, 00:12:42.909 "state": "online", 00:12:42.909 "raid_level": "raid1", 00:12:42.909 "superblock": true, 00:12:42.909 "num_base_bdevs": 3, 00:12:42.909 "num_base_bdevs_discovered": 3, 00:12:42.909 "num_base_bdevs_operational": 3, 00:12:42.909 "base_bdevs_list": [ 00:12:42.909 { 00:12:42.909 "name": "pt1", 00:12:42.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.909 "is_configured": true, 00:12:42.909 "data_offset": 2048, 00:12:42.909 "data_size": 63488 00:12:42.909 }, 00:12:42.909 { 00:12:42.909 "name": "pt2", 00:12:42.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.909 "is_configured": true, 00:12:42.909 "data_offset": 2048, 00:12:42.909 "data_size": 63488 00:12:42.909 }, 00:12:42.909 { 00:12:42.909 "name": "pt3", 00:12:42.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.909 "is_configured": true, 00:12:42.910 "data_offset": 2048, 00:12:42.910 "data_size": 63488 00:12:42.910 } 00:12:42.910 ] 00:12:42.910 }' 00:12:42.910 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.910 18:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:43.179 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:43.437 [2024-07-15 18:26:35.802298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:43.696 "name": "raid_bdev1", 00:12:43.696 "aliases": [ 00:12:43.696 "bfdb972d-42d7-11ef-9ade-d5fc5159efa5" 00:12:43.696 ], 00:12:43.696 "product_name": "Raid Volume", 00:12:43.696 "block_size": 512, 00:12:43.696 "num_blocks": 63488, 00:12:43.696 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:43.696 "assigned_rate_limits": { 00:12:43.696 "rw_ios_per_sec": 0, 00:12:43.696 "rw_mbytes_per_sec": 0, 00:12:43.696 "r_mbytes_per_sec": 0, 00:12:43.696 "w_mbytes_per_sec": 0 00:12:43.696 }, 00:12:43.696 "claimed": false, 00:12:43.696 "zoned": false, 00:12:43.696 "supported_io_types": { 00:12:43.696 "read": true, 00:12:43.696 "write": true, 00:12:43.696 "unmap": false, 00:12:43.696 "flush": false, 00:12:43.696 "reset": true, 00:12:43.696 "nvme_admin": false, 00:12:43.696 "nvme_io": false, 00:12:43.696 "nvme_io_md": false, 00:12:43.696 "write_zeroes": true, 00:12:43.696 "zcopy": false, 00:12:43.696 "get_zone_info": false, 00:12:43.696 "zone_management": false, 00:12:43.696 "zone_append": false, 00:12:43.696 "compare": false, 00:12:43.696 "compare_and_write": false, 00:12:43.696 "abort": false, 00:12:43.696 "seek_hole": false, 00:12:43.696 "seek_data": false, 00:12:43.696 "copy": false, 00:12:43.696 "nvme_iov_md": false 00:12:43.696 }, 00:12:43.696 "memory_domains": [ 00:12:43.696 { 00:12:43.696 "dma_device_id": "system", 00:12:43.696 "dma_device_type": 1 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.696 "dma_device_type": 2 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "dma_device_id": "system", 00:12:43.696 "dma_device_type": 1 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.696 "dma_device_type": 2 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "dma_device_id": "system", 00:12:43.696 "dma_device_type": 1 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.696 "dma_device_type": 2 00:12:43.696 } 00:12:43.696 ], 00:12:43.696 "driver_specific": { 00:12:43.696 "raid": { 00:12:43.696 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:43.696 "strip_size_kb": 0, 00:12:43.696 "state": "online", 00:12:43.696 "raid_level": "raid1", 00:12:43.696 "superblock": true, 00:12:43.696 "num_base_bdevs": 3, 00:12:43.696 "num_base_bdevs_discovered": 3, 00:12:43.696 "num_base_bdevs_operational": 3, 00:12:43.696 "base_bdevs_list": [ 00:12:43.696 { 00:12:43.696 "name": "pt1", 00:12:43.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.696 "is_configured": true, 00:12:43.696 "data_offset": 2048, 00:12:43.696 "data_size": 63488 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "name": "pt2", 00:12:43.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.696 "is_configured": true, 00:12:43.696 "data_offset": 2048, 00:12:43.696 "data_size": 63488 00:12:43.696 }, 00:12:43.696 { 00:12:43.696 "name": "pt3", 00:12:43.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.696 
"is_configured": true, 00:12:43.696 "data_offset": 2048, 00:12:43.696 "data_size": 63488 00:12:43.696 } 00:12:43.696 ] 00:12:43.696 } 00:12:43.696 } 00:12:43.696 }' 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:43.696 pt2 00:12:43.696 pt3' 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:43.696 18:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.956 "name": "pt1", 00:12:43.956 "aliases": [ 00:12:43.956 "00000000-0000-0000-0000-000000000001" 00:12:43.956 ], 00:12:43.956 "product_name": "passthru", 00:12:43.956 "block_size": 512, 00:12:43.956 "num_blocks": 65536, 00:12:43.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.956 "assigned_rate_limits": { 00:12:43.956 "rw_ios_per_sec": 0, 00:12:43.956 "rw_mbytes_per_sec": 0, 00:12:43.956 "r_mbytes_per_sec": 0, 00:12:43.956 "w_mbytes_per_sec": 0 00:12:43.956 }, 00:12:43.956 "claimed": true, 00:12:43.956 "claim_type": "exclusive_write", 00:12:43.956 "zoned": false, 00:12:43.956 "supported_io_types": { 00:12:43.956 "read": true, 00:12:43.956 "write": true, 00:12:43.956 "unmap": true, 00:12:43.956 "flush": true, 00:12:43.956 "reset": true, 00:12:43.956 "nvme_admin": false, 00:12:43.956 "nvme_io": false, 00:12:43.956 "nvme_io_md": false, 00:12:43.956 "write_zeroes": true, 00:12:43.956 "zcopy": true, 00:12:43.956 "get_zone_info": false, 00:12:43.956 "zone_management": false, 00:12:43.956 "zone_append": false, 00:12:43.956 "compare": false, 00:12:43.956 "compare_and_write": false, 00:12:43.956 "abort": true, 00:12:43.956 "seek_hole": false, 00:12:43.956 "seek_data": false, 00:12:43.956 "copy": true, 00:12:43.956 "nvme_iov_md": false 00:12:43.956 }, 00:12:43.956 "memory_domains": [ 00:12:43.956 { 00:12:43.956 "dma_device_id": "system", 00:12:43.956 "dma_device_type": 1 00:12:43.956 }, 00:12:43.956 { 00:12:43.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.956 "dma_device_type": 2 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "driver_specific": { 00:12:43.956 "passthru": { 00:12:43.956 "name": "pt1", 00:12:43.956 "base_bdev_name": "malloc1" 00:12:43.956 } 00:12:43.956 } 00:12:43.956 }' 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # [[ null == null ]] 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:43.956 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.215 "name": "pt2", 00:12:44.215 "aliases": [ 00:12:44.215 "00000000-0000-0000-0000-000000000002" 00:12:44.215 ], 00:12:44.215 "product_name": "passthru", 00:12:44.215 "block_size": 512, 00:12:44.215 "num_blocks": 65536, 00:12:44.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.215 "assigned_rate_limits": { 00:12:44.215 "rw_ios_per_sec": 0, 00:12:44.215 "rw_mbytes_per_sec": 0, 00:12:44.215 "r_mbytes_per_sec": 0, 00:12:44.215 "w_mbytes_per_sec": 0 00:12:44.215 }, 00:12:44.215 "claimed": true, 00:12:44.215 "claim_type": "exclusive_write", 00:12:44.215 "zoned": false, 00:12:44.215 "supported_io_types": { 00:12:44.215 "read": true, 00:12:44.215 "write": true, 00:12:44.215 "unmap": true, 00:12:44.215 "flush": true, 00:12:44.215 "reset": true, 00:12:44.215 "nvme_admin": false, 00:12:44.215 "nvme_io": false, 00:12:44.215 "nvme_io_md": false, 00:12:44.215 "write_zeroes": true, 00:12:44.215 "zcopy": true, 00:12:44.215 "get_zone_info": false, 00:12:44.215 "zone_management": false, 00:12:44.215 "zone_append": false, 00:12:44.215 "compare": false, 00:12:44.215 "compare_and_write": false, 00:12:44.215 "abort": true, 00:12:44.215 "seek_hole": false, 00:12:44.215 "seek_data": false, 00:12:44.215 "copy": true, 00:12:44.215 "nvme_iov_md": false 00:12:44.215 }, 00:12:44.215 "memory_domains": [ 00:12:44.215 { 00:12:44.215 "dma_device_id": "system", 00:12:44.215 "dma_device_type": 1 00:12:44.215 }, 00:12:44.215 { 00:12:44.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.215 "dma_device_type": 2 00:12:44.215 } 00:12:44.215 ], 00:12:44.215 "driver_specific": { 00:12:44.215 "passthru": { 00:12:44.215 "name": "pt2", 00:12:44.215 "base_bdev_name": "malloc2" 00:12:44.215 } 00:12:44.215 } 00:12:44.215 }' 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.215 18:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:44.215 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.473 "name": "pt3", 00:12:44.473 "aliases": [ 00:12:44.473 "00000000-0000-0000-0000-000000000003" 00:12:44.473 ], 00:12:44.473 "product_name": "passthru", 00:12:44.473 "block_size": 512, 00:12:44.473 "num_blocks": 65536, 00:12:44.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.473 "assigned_rate_limits": { 00:12:44.473 "rw_ios_per_sec": 0, 00:12:44.473 "rw_mbytes_per_sec": 0, 00:12:44.473 "r_mbytes_per_sec": 0, 00:12:44.473 "w_mbytes_per_sec": 0 00:12:44.473 }, 00:12:44.473 "claimed": true, 00:12:44.473 "claim_type": "exclusive_write", 00:12:44.473 "zoned": false, 00:12:44.473 "supported_io_types": { 00:12:44.473 "read": true, 00:12:44.473 "write": true, 00:12:44.473 "unmap": true, 00:12:44.473 "flush": true, 00:12:44.473 "reset": true, 00:12:44.473 "nvme_admin": false, 00:12:44.473 "nvme_io": false, 00:12:44.473 "nvme_io_md": false, 00:12:44.473 "write_zeroes": true, 00:12:44.473 "zcopy": true, 00:12:44.473 "get_zone_info": false, 00:12:44.473 "zone_management": false, 00:12:44.473 "zone_append": false, 00:12:44.473 "compare": false, 00:12:44.473 "compare_and_write": false, 00:12:44.473 "abort": true, 00:12:44.473 "seek_hole": false, 00:12:44.473 "seek_data": false, 00:12:44.473 "copy": true, 00:12:44.473 "nvme_iov_md": false 00:12:44.473 }, 00:12:44.473 "memory_domains": [ 00:12:44.473 { 00:12:44.473 "dma_device_id": "system", 00:12:44.473 "dma_device_type": 1 00:12:44.473 }, 00:12:44.473 { 00:12:44.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.473 "dma_device_type": 2 00:12:44.473 } 00:12:44.473 ], 00:12:44.473 "driver_specific": { 00:12:44.473 "passthru": { 00:12:44.473 "name": "pt3", 00:12:44.473 "base_bdev_name": "malloc3" 00:12:44.473 } 00:12:44.473 } 00:12:44.473 }' 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.473 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:44.732 18:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:44.993 [2024-07-15 18:26:37.154394] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.993 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bfdb972d-42d7-11ef-9ade-d5fc5159efa5 '!=' bfdb972d-42d7-11ef-9ade-d5fc5159efa5 ']' 00:12:44.993 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:44.993 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:44.993 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:44.993 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:45.251 [2024-07-15 18:26:37.422396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.251 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.508 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.508 "name": "raid_bdev1", 00:12:45.508 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:45.508 "strip_size_kb": 0, 00:12:45.508 "state": "online", 00:12:45.508 "raid_level": "raid1", 00:12:45.508 "superblock": true, 00:12:45.508 "num_base_bdevs": 3, 00:12:45.508 "num_base_bdevs_discovered": 2, 00:12:45.508 "num_base_bdevs_operational": 2, 00:12:45.508 "base_bdevs_list": [ 00:12:45.508 { 00:12:45.508 "name": null, 00:12:45.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.508 "is_configured": false, 00:12:45.508 "data_offset": 2048, 00:12:45.508 "data_size": 63488 00:12:45.508 }, 00:12:45.508 { 00:12:45.508 "name": "pt2", 00:12:45.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.508 "is_configured": true, 00:12:45.508 "data_offset": 2048, 00:12:45.508 "data_size": 63488 00:12:45.508 }, 00:12:45.508 { 00:12:45.508 "name": "pt3", 
00:12:45.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.509 "is_configured": true, 00:12:45.509 "data_offset": 2048, 00:12:45.509 "data_size": 63488 00:12:45.509 } 00:12:45.509 ] 00:12:45.509 }' 00:12:45.509 18:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.509 18:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.073 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:46.073 [2024-07-15 18:26:38.422464] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.073 [2024-07-15 18:26:38.422498] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.073 [2024-07-15 18:26:38.422522] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.073 [2024-07-15 18:26:38.422538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.073 [2024-07-15 18:26:38.422543] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d034780 name raid_bdev1, state offline 00:12:46.073 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:46.073 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.331 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:46.331 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:46.331 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:46.331 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:46.331 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:46.588 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:46.588 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:46.588 18:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:46.848 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:46.848 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:46.848 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:46.848 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:46.848 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.110 [2024-07-15 18:26:39.426553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.110 [2024-07-15 18:26:39.426619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.110 [2024-07-15 18:26:39.426633] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d035400 00:12:47.110 [2024-07-15 18:26:39.426641] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.110 [2024-07-15 18:26:39.427394] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.110 [2024-07-15 18:26:39.427437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.110 [2024-07-15 18:26:39.427481] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.110 [2024-07-15 18:26:39.427502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.110 pt2 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.110 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.368 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.368 "name": "raid_bdev1", 00:12:47.368 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:47.368 "strip_size_kb": 0, 00:12:47.368 "state": "configuring", 00:12:47.368 "raid_level": "raid1", 00:12:47.368 "superblock": true, 00:12:47.368 "num_base_bdevs": 3, 00:12:47.368 "num_base_bdevs_discovered": 1, 00:12:47.368 "num_base_bdevs_operational": 2, 00:12:47.368 "base_bdevs_list": [ 00:12:47.368 { 00:12:47.368 "name": null, 00:12:47.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.368 "is_configured": false, 00:12:47.368 "data_offset": 2048, 00:12:47.368 "data_size": 63488 00:12:47.368 }, 00:12:47.368 { 00:12:47.368 "name": "pt2", 00:12:47.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.368 "is_configured": true, 00:12:47.368 "data_offset": 2048, 00:12:47.368 "data_size": 63488 00:12:47.368 }, 00:12:47.368 { 00:12:47.368 "name": null, 00:12:47.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.368 "is_configured": false, 00:12:47.368 "data_offset": 2048, 00:12:47.368 "data_size": 63488 00:12:47.368 } 00:12:47.368 ] 00:12:47.368 }' 00:12:47.368 18:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.368 18:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < 
num_base_bdevs - 1 )) 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.935 [2024-07-15 18:26:40.294620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.935 [2024-07-15 18:26:40.294686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.935 [2024-07-15 18:26:40.294699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034780 00:12:47.935 [2024-07-15 18:26:40.294708] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.935 [2024-07-15 18:26:40.294838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.935 [2024-07-15 18:26:40.294850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.935 [2024-07-15 18:26:40.294876] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:47.935 [2024-07-15 18:26:40.294885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.935 [2024-07-15 18:26:40.294913] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3388d035180 00:12:47.935 [2024-07-15 18:26:40.294918] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.935 [2024-07-15 18:26:40.294939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3388d097e20 00:12:47.935 [2024-07-15 18:26:40.294989] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3388d035180 00:12:47.935 [2024-07-15 18:26:40.294994] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3388d035180 00:12:47.935 [2024-07-15 18:26:40.295016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.935 pt3 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.935 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.193 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.193 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.193 18:26:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.193 "name": "raid_bdev1", 00:12:48.193 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:48.193 "strip_size_kb": 0, 00:12:48.193 "state": "online", 00:12:48.193 "raid_level": "raid1", 00:12:48.193 "superblock": true, 00:12:48.193 "num_base_bdevs": 3, 00:12:48.193 "num_base_bdevs_discovered": 2, 00:12:48.193 "num_base_bdevs_operational": 2, 00:12:48.193 "base_bdevs_list": [ 00:12:48.193 { 00:12:48.193 "name": null, 00:12:48.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.193 "is_configured": false, 00:12:48.193 "data_offset": 2048, 00:12:48.193 "data_size": 63488 00:12:48.193 }, 00:12:48.193 { 00:12:48.193 "name": "pt2", 00:12:48.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.193 "is_configured": true, 00:12:48.193 "data_offset": 2048, 00:12:48.193 "data_size": 63488 00:12:48.193 }, 00:12:48.193 { 00:12:48.193 "name": "pt3", 00:12:48.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.193 "is_configured": true, 00:12:48.193 "data_offset": 2048, 00:12:48.193 "data_size": 63488 00:12:48.193 } 00:12:48.193 ] 00:12:48.193 }' 00:12:48.193 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.193 18:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 18:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:49.033 [2024-07-15 18:26:41.194719] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.033 [2024-07-15 18:26:41.194751] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.033 [2024-07-15 18:26:41.194777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.033 [2024-07-15 18:26:41.194792] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.033 [2024-07-15 18:26:41.194796] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d035180 name raid_bdev1, state offline 00:12:49.033 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.033 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:49.291 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:49.291 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:49.291 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:12:49.291 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:12:49.291 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:49.549 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:49.807 [2024-07-15 18:26:41.950783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:49.807 [2024-07-15 18:26:41.950847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.807 [2024-07-15 18:26:41.950860] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034780 00:12:49.807 [2024-07-15 18:26:41.950869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.807 [2024-07-15 18:26:41.951586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.807 [2024-07-15 18:26:41.951617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:49.807 [2024-07-15 18:26:41.951644] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:49.807 [2024-07-15 18:26:41.951656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:49.807 [2024-07-15 18:26:41.951687] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:49.807 [2024-07-15 18:26:41.951692] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.807 [2024-07-15 18:26:41.951697] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d035180 name raid_bdev1, state configuring 00:12:49.807 [2024-07-15 18:26:41.951705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:49.807 pt1 00:12:49.807 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:12:49.807 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:49.807 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:49.807 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.808 18:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.066 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.066 "name": "raid_bdev1", 00:12:50.066 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:50.066 "strip_size_kb": 0, 00:12:50.066 "state": "configuring", 00:12:50.066 "raid_level": "raid1", 00:12:50.066 "superblock": true, 00:12:50.066 "num_base_bdevs": 3, 00:12:50.066 "num_base_bdevs_discovered": 1, 00:12:50.066 "num_base_bdevs_operational": 2, 00:12:50.066 "base_bdevs_list": [ 00:12:50.066 { 00:12:50.066 "name": null, 00:12:50.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.066 "is_configured": false, 00:12:50.066 "data_offset": 2048, 00:12:50.066 "data_size": 63488 00:12:50.066 }, 00:12:50.066 { 00:12:50.066 "name": 
"pt2", 00:12:50.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.066 "is_configured": true, 00:12:50.066 "data_offset": 2048, 00:12:50.066 "data_size": 63488 00:12:50.066 }, 00:12:50.066 { 00:12:50.066 "name": null, 00:12:50.066 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.066 "is_configured": false, 00:12:50.066 "data_offset": 2048, 00:12:50.066 "data_size": 63488 00:12:50.066 } 00:12:50.066 ] 00:12:50.066 }' 00:12:50.066 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.066 18:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.325 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:50.325 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:50.584 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:12:50.584 18:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:50.843 [2024-07-15 18:26:43.138890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:50.843 [2024-07-15 18:26:43.138955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.843 [2024-07-15 18:26:43.138968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3388d034c80 00:12:50.843 [2024-07-15 18:26:43.138976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.843 [2024-07-15 18:26:43.139112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.843 [2024-07-15 18:26:43.139137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:50.843 [2024-07-15 18:26:43.139161] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:50.843 [2024-07-15 18:26:43.139171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:50.843 [2024-07-15 18:26:43.139199] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3388d035180 00:12:50.843 [2024-07-15 18:26:43.139204] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.843 [2024-07-15 18:26:43.139225] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3388d097e20 00:12:50.843 [2024-07-15 18:26:43.139276] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3388d035180 00:12:50.843 [2024-07-15 18:26:43.139281] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3388d035180 00:12:50.843 [2024-07-15 18:26:43.139302] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.843 pt3 00:12:50.843 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:50.844 18:26:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.844 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.106 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.106 "name": "raid_bdev1", 00:12:51.106 "uuid": "bfdb972d-42d7-11ef-9ade-d5fc5159efa5", 00:12:51.106 "strip_size_kb": 0, 00:12:51.106 "state": "online", 00:12:51.106 "raid_level": "raid1", 00:12:51.106 "superblock": true, 00:12:51.106 "num_base_bdevs": 3, 00:12:51.106 "num_base_bdevs_discovered": 2, 00:12:51.106 "num_base_bdevs_operational": 2, 00:12:51.106 "base_bdevs_list": [ 00:12:51.106 { 00:12:51.106 "name": null, 00:12:51.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.106 "is_configured": false, 00:12:51.106 "data_offset": 2048, 00:12:51.106 "data_size": 63488 00:12:51.106 }, 00:12:51.106 { 00:12:51.106 "name": "pt2", 00:12:51.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.106 "is_configured": true, 00:12:51.106 "data_offset": 2048, 00:12:51.106 "data_size": 63488 00:12:51.106 }, 00:12:51.106 { 00:12:51.106 "name": "pt3", 00:12:51.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.106 "is_configured": true, 00:12:51.106 "data_offset": 2048, 00:12:51.106 "data_size": 63488 00:12:51.106 } 00:12:51.106 ] 00:12:51.106 }' 00:12:51.106 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.106 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.418 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:51.418 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:51.702 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:51.702 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:51.702 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:51.960 [2024-07-15 18:26:44.243052] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' bfdb972d-42d7-11ef-9ade-d5fc5159efa5 '!=' bfdb972d-42d7-11ef-9ade-d5fc5159efa5 ']' 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57585 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57585 ']' 00:12:51.960 18:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57585 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57585 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:51.960 killing process with pid 57585 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57585' 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57585 00:12:51.960 [2024-07-15 18:26:44.274736] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.960 [2024-07-15 18:26:44.274766] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.960 [2024-07-15 18:26:44.274782] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.960 [2024-07-15 18:26:44.274787] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3388d035180 name raid_bdev1, state offline 00:12:51.960 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57585 00:12:51.960 [2024-07-15 18:26:44.299321] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.219 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:52.219 00:12:52.219 real 0m19.953s 00:12:52.219 user 0m36.281s 00:12:52.219 sys 0m2.752s 00:12:52.219 ************************************ 00:12:52.219 END TEST raid_superblock_test 00:12:52.219 ************************************ 00:12:52.219 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.220 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.220 18:26:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:52.220 18:26:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:52.220 18:26:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:52.220 18:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.220 18:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.220 ************************************ 00:12:52.220 START TEST raid_read_error_test 00:12:52.220 ************************************ 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:52.220 18:26:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ArMZPk9nS8 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58143 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58143 /var/tmp/spdk-raid.sock 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 58143 ']' 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.220 18:26:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.220 [2024-07-15 18:26:44.596144] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:12:52.220 [2024-07-15 18:26:44.596414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:53.155 EAL: TSC is not safe to use in SMP mode 00:12:53.155 EAL: TSC is not invariant 00:12:53.155 [2024-07-15 18:26:45.185378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.155 [2024-07-15 18:26:45.296025] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:53.155 [2024-07-15 18:26:45.298133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.155 [2024-07-15 18:26:45.298919] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.155 [2024-07-15 18:26:45.298934] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.414 18:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.414 18:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:53.414 18:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:53.414 18:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.673 BaseBdev1_malloc 00:12:53.673 18:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:53.931 true 00:12:53.931 18:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:54.189 [2024-07-15 18:26:46.483090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:54.189 [2024-07-15 18:26:46.483153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.189 [2024-07-15 18:26:46.483182] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad751434780 00:12:54.189 [2024-07-15 18:26:46.483190] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.189 [2024-07-15 18:26:46.483869] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.189 [2024-07-15 18:26:46.483896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.189 BaseBdev1 00:12:54.189 18:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:54.189 18:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.446 BaseBdev2_malloc 00:12:54.446 18:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:54.705 true 00:12:54.705 18:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:54.968 [2024-07-15 18:26:47.315155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:54.968 [2024-07-15 18:26:47.315217] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.968 [2024-07-15 18:26:47.315246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad751434c80 00:12:54.968 [2024-07-15 18:26:47.315256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.968 [2024-07-15 18:26:47.315949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.968 [2024-07-15 18:26:47.315981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.968 BaseBdev2 00:12:54.968 18:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:54.968 18:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:55.542 BaseBdev3_malloc 00:12:55.543 18:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:55.543 true 00:12:55.801 18:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:56.059 [2024-07-15 18:26:48.227226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:56.059 [2024-07-15 18:26:48.227298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.059 [2024-07-15 18:26:48.227327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad751435180 00:12:56.059 [2024-07-15 18:26:48.227336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.059 [2024-07-15 18:26:48.228037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.059 [2024-07-15 18:26:48.228069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:56.059 BaseBdev3 00:12:56.059 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:56.318 [2024-07-15 18:26:48.539263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.318 [2024-07-15 18:26:48.539863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.318 [2024-07-15 18:26:48.539890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.318 [2024-07-15 18:26:48.539950] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad751435400 00:12:56.318 [2024-07-15 18:26:48.539957] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.318 [2024-07-15 18:26:48.539990] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ad7514a0e20 00:12:56.318 [2024-07-15 18:26:48.540073] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad751435400 00:12:56.318 [2024-07-15 18:26:48.540077] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ad751435400 00:12:56.318 [2024-07-15 18:26:48.540106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.318 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.576 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.576 "name": "raid_bdev1", 00:12:56.576 "uuid": "cc566c98-42d7-11ef-9ade-d5fc5159efa5", 00:12:56.576 "strip_size_kb": 0, 00:12:56.576 "state": "online", 00:12:56.576 "raid_level": "raid1", 00:12:56.576 "superblock": true, 00:12:56.576 "num_base_bdevs": 3, 00:12:56.576 "num_base_bdevs_discovered": 3, 00:12:56.576 "num_base_bdevs_operational": 3, 00:12:56.576 "base_bdevs_list": [ 00:12:56.576 { 00:12:56.576 "name": "BaseBdev1", 00:12:56.576 "uuid": "ed9c929a-9402-a55d-af46-1a47ea198210", 00:12:56.577 "is_configured": true, 00:12:56.577 "data_offset": 2048, 00:12:56.577 "data_size": 63488 00:12:56.577 }, 00:12:56.577 { 00:12:56.577 "name": "BaseBdev2", 00:12:56.577 "uuid": "7f4c505c-9b9b-2955-885a-ea87174f910d", 00:12:56.577 "is_configured": true, 00:12:56.577 "data_offset": 2048, 00:12:56.577 "data_size": 63488 00:12:56.577 }, 00:12:56.577 { 00:12:56.577 "name": "BaseBdev3", 00:12:56.577 "uuid": "22e1d3f3-5211-2854-958f-140a1b7cd956", 00:12:56.577 "is_configured": true, 00:12:56.577 "data_offset": 2048, 00:12:56.577 "data_size": 63488 00:12:56.577 } 00:12:56.577 ] 00:12:56.577 }' 00:12:56.577 18:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.577 18:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.836 18:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:56.836 18:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:57.095 [2024-07-15 18:26:49.315543] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ad7514a0ec0 00:12:58.028 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.286 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.545 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.545 "name": "raid_bdev1", 00:12:58.545 "uuid": "cc566c98-42d7-11ef-9ade-d5fc5159efa5", 00:12:58.545 "strip_size_kb": 0, 00:12:58.545 "state": "online", 00:12:58.545 "raid_level": "raid1", 00:12:58.545 "superblock": true, 00:12:58.545 "num_base_bdevs": 3, 00:12:58.545 "num_base_bdevs_discovered": 3, 00:12:58.545 "num_base_bdevs_operational": 3, 00:12:58.545 "base_bdevs_list": [ 00:12:58.545 { 00:12:58.545 "name": "BaseBdev1", 00:12:58.545 "uuid": "ed9c929a-9402-a55d-af46-1a47ea198210", 00:12:58.545 "is_configured": true, 00:12:58.545 "data_offset": 2048, 00:12:58.545 "data_size": 63488 00:12:58.545 }, 00:12:58.545 { 00:12:58.545 "name": "BaseBdev2", 00:12:58.545 "uuid": "7f4c505c-9b9b-2955-885a-ea87174f910d", 00:12:58.545 "is_configured": true, 00:12:58.545 "data_offset": 2048, 00:12:58.545 "data_size": 63488 00:12:58.545 }, 00:12:58.545 { 00:12:58.545 "name": "BaseBdev3", 00:12:58.545 "uuid": "22e1d3f3-5211-2854-958f-140a1b7cd956", 00:12:58.545 "is_configured": true, 00:12:58.545 "data_offset": 2048, 00:12:58.545 "data_size": 63488 00:12:58.545 } 00:12:58.545 ] 00:12:58.545 }' 00:12:58.545 18:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.545 18:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.139 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:59.139 [2024-07-15 18:26:51.494408] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.139 [2024-07-15 18:26:51.494458] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.139 [2024-07-15 18:26:51.494983] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.139 [2024-07-15 18:26:51.494995] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.139 [2024-07-15 18:26:51.495011] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.139 [2024-07-15 18:26:51.495015] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad751435400 name raid_bdev1, state offline 00:12:59.139 0 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58143 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 58143 ']' 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 58143 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58143 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58143' 00:12:59.410 killing process with pid 58143 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 58143 00:12:59.410 [2024-07-15 18:26:51.526635] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.410 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 58143 00:12:59.410 [2024-07-15 18:26:51.559601] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ArMZPk9nS8 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:59.669 00:12:59.669 real 0m7.243s 00:12:59.669 user 0m11.505s 00:12:59.669 sys 0m1.144s 00:12:59.669 ************************************ 00:12:59.669 END TEST raid_read_error_test 00:12:59.669 ************************************ 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.669 18:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.669 18:26:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:59.669 18:26:51 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:59.669 18:26:51 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:59.669 18:26:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.669 18:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.669 ************************************ 00:12:59.669 START TEST raid_write_error_test 00:12:59.669 ************************************ 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:59.669 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.m4ZcrOEvFJ 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58278 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58278 /var/tmp/spdk-raid.sock 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58278 ']' 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.670 18:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.670 [2024-07-15 18:26:51.883187] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:12:59.670 [2024-07-15 18:26:51.883413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:00.236 EAL: TSC is not safe to use in SMP mode 00:13:00.236 EAL: TSC is not invariant 00:13:00.236 [2024-07-15 18:26:52.555393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.494 [2024-07-15 18:26:52.685194] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:00.494 [2024-07-15 18:26:52.687666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.494 [2024-07-15 18:26:52.688601] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.494 [2024-07-15 18:26:52.688619] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.753 18:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.753 18:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:00.753 18:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:00.753 18:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.010 BaseBdev1_malloc 00:13:01.011 18:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:01.268 true 00:13:01.268 18:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:01.526 [2024-07-15 18:26:53.834560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:01.526 [2024-07-15 18:26:53.834649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.526 [2024-07-15 18:26:53.834680] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2e736dc34780 00:13:01.526 [2024-07-15 18:26:53.834689] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.526 [2024-07-15 18:26:53.835361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.526 [2024-07-15 18:26:53.835384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:13:01.526 BaseBdev1 00:13:01.526 18:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.526 18:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.784 BaseBdev2_malloc 00:13:01.784 18:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:02.350 true 00:13:02.350 18:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:02.350 [2024-07-15 18:26:54.710607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:02.350 [2024-07-15 18:26:54.710668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.350 [2024-07-15 18:26:54.710695] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2e736dc34c80 00:13:02.350 [2024-07-15 18:26:54.710704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.350 [2024-07-15 18:26:54.711378] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.350 [2024-07-15 18:26:54.711410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.350 BaseBdev2 00:13:02.350 18:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:02.350 18:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:02.914 BaseBdev3_malloc 00:13:02.914 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:02.914 true 00:13:02.914 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:03.171 [2024-07-15 18:26:55.506669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:03.171 [2024-07-15 18:26:55.506773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.171 [2024-07-15 18:26:55.506802] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2e736dc35180 00:13:03.171 [2024-07-15 18:26:55.506816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.171 [2024-07-15 18:26:55.507475] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.171 [2024-07-15 18:26:55.507501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.171 BaseBdev3 00:13:03.171 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:03.462 [2024-07-15 18:26:55.742700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.462 [2024-07-15 18:26:55.743301] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.462 [2024-07-15 18:26:55.743326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.462 [2024-07-15 18:26:55.743389] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e736dc35400 00:13:03.462 [2024-07-15 18:26:55.743395] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.462 [2024-07-15 18:26:55.743428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e736dca0e20 00:13:03.462 [2024-07-15 18:26:55.743516] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e736dc35400 00:13:03.462 [2024-07-15 18:26:55.743521] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2e736dc35400 00:13:03.462 [2024-07-15 18:26:55.743551] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.462 18:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.735 18:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.735 "name": "raid_bdev1", 00:13:03.735 "uuid": "d0a194d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:03.735 "strip_size_kb": 0, 00:13:03.735 "state": "online", 00:13:03.735 "raid_level": "raid1", 00:13:03.735 "superblock": true, 00:13:03.735 "num_base_bdevs": 3, 00:13:03.735 "num_base_bdevs_discovered": 3, 00:13:03.735 "num_base_bdevs_operational": 3, 00:13:03.735 "base_bdevs_list": [ 00:13:03.735 { 00:13:03.735 "name": "BaseBdev1", 00:13:03.735 "uuid": "10d42705-a9a5-e65d-871f-57d53d2bf0de", 00:13:03.735 "is_configured": true, 00:13:03.735 "data_offset": 2048, 00:13:03.735 "data_size": 63488 00:13:03.735 }, 00:13:03.735 { 00:13:03.735 "name": "BaseBdev2", 00:13:03.735 "uuid": "94105c40-edae-3857-be6f-2c2f1abfd073", 00:13:03.735 "is_configured": true, 00:13:03.735 "data_offset": 2048, 00:13:03.735 "data_size": 63488 00:13:03.735 }, 00:13:03.735 { 00:13:03.735 "name": "BaseBdev3", 00:13:03.735 "uuid": "8bcd4b44-ce74-2956-b443-9c97eec46147", 00:13:03.735 "is_configured": true, 00:13:03.735 "data_offset": 2048, 00:13:03.735 
"data_size": 63488 00:13:03.735 } 00:13:03.735 ] 00:13:03.735 }' 00:13:03.735 18:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.735 18:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.993 18:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:03.993 18:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:04.250 [2024-07-15 18:26:56.470982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e736dca0ec0 00:13:05.182 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:05.442 [2024-07-15 18:26:57.703195] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:05.443 [2024-07-15 18:26:57.703252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.443 [2024-07-15 18:26:57.703380] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2e736dca0ec0 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.443 18:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.701 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.701 "name": "raid_bdev1", 00:13:05.701 "uuid": "d0a194d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:05.701 "strip_size_kb": 0, 00:13:05.701 "state": "online", 00:13:05.701 "raid_level": "raid1", 00:13:05.701 "superblock": true, 00:13:05.701 "num_base_bdevs": 3, 00:13:05.702 
"num_base_bdevs_discovered": 2, 00:13:05.702 "num_base_bdevs_operational": 2, 00:13:05.702 "base_bdevs_list": [ 00:13:05.702 { 00:13:05.702 "name": null, 00:13:05.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.702 "is_configured": false, 00:13:05.702 "data_offset": 2048, 00:13:05.702 "data_size": 63488 00:13:05.702 }, 00:13:05.702 { 00:13:05.702 "name": "BaseBdev2", 00:13:05.702 "uuid": "94105c40-edae-3857-be6f-2c2f1abfd073", 00:13:05.702 "is_configured": true, 00:13:05.702 "data_offset": 2048, 00:13:05.702 "data_size": 63488 00:13:05.702 }, 00:13:05.702 { 00:13:05.702 "name": "BaseBdev3", 00:13:05.702 "uuid": "8bcd4b44-ce74-2956-b443-9c97eec46147", 00:13:05.702 "is_configured": true, 00:13:05.702 "data_offset": 2048, 00:13:05.702 "data_size": 63488 00:13:05.702 } 00:13:05.702 ] 00:13:05.702 }' 00:13:05.702 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.702 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.960 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:06.218 [2024-07-15 18:26:58.596885] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.218 [2024-07-15 18:26:58.596914] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.218 [2024-07-15 18:26:58.597295] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.218 [2024-07-15 18:26:58.597307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.218 [2024-07-15 18:26:58.597321] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.218 [2024-07-15 18:26:58.597325] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e736dc35400 name raid_bdev1, state offline 00:13:06.218 0 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58278 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58278 ']' 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58278 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58278 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:06.477 killing process with pid 58278 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58278' 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58278 00:13:06.477 [2024-07-15 18:26:58.630148] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.477 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58278 00:13:06.477 [2024-07-15 18:26:58.653215] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.m4ZcrOEvFJ 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:06.736 00:13:06.736 real 0m7.009s 00:13:06.736 user 0m11.018s 00:13:06.736 sys 0m1.248s 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.736 18:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.736 ************************************ 00:13:06.736 END TEST raid_write_error_test 00:13:06.736 ************************************ 00:13:06.736 18:26:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:06.736 18:26:58 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:06.736 18:26:58 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:06.736 18:26:58 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:06.736 18:26:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:06.736 18:26:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.736 18:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.736 ************************************ 00:13:06.736 START TEST raid_state_function_test 00:13:06.736 ************************************ 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev3 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58407 00:13:06.736 Process raid pid: 58407 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58407' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58407 /var/tmp/spdk-raid.sock 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58407 ']' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.736 18:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.736 [2024-07-15 18:26:58.934790] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:13:06.736 [2024-07-15 18:26:58.934981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:07.302 EAL: TSC is not safe to use in SMP mode 00:13:07.302 EAL: TSC is not invariant 00:13:07.302 [2024-07-15 18:26:59.535313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.302 [2024-07-15 18:26:59.654816] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:07.302 [2024-07-15 18:26:59.657257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.302 [2024-07-15 18:26:59.658186] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.302 [2024-07-15 18:26:59.658203] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.868 18:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.868 18:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:07.868 18:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:07.868 [2024-07-15 18:27:00.223949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.868 [2024-07-15 18:27:00.224016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.868 [2024-07-15 18:27:00.224022] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.868 [2024-07-15 18:27:00.224031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.868 [2024-07-15 18:27:00.224035] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.868 [2024-07-15 18:27:00.224043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.868 [2024-07-15 18:27:00.224046] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:07.868 [2024-07-15 18:27:00.224054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.868 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:07.868 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.868 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.869 18:27:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.869 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.126 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.126 "name": "Existed_Raid", 00:13:08.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.127 "strip_size_kb": 64, 00:13:08.127 "state": "configuring", 00:13:08.127 "raid_level": "raid0", 00:13:08.127 "superblock": false, 00:13:08.127 "num_base_bdevs": 4, 00:13:08.127 "num_base_bdevs_discovered": 0, 00:13:08.127 "num_base_bdevs_operational": 4, 00:13:08.127 "base_bdevs_list": [ 00:13:08.127 { 00:13:08.127 "name": "BaseBdev1", 00:13:08.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.127 "is_configured": false, 00:13:08.127 "data_offset": 0, 00:13:08.127 "data_size": 0 00:13:08.127 }, 00:13:08.127 { 00:13:08.127 "name": "BaseBdev2", 00:13:08.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.127 "is_configured": false, 00:13:08.127 "data_offset": 0, 00:13:08.127 "data_size": 0 00:13:08.127 }, 00:13:08.127 { 00:13:08.127 "name": "BaseBdev3", 00:13:08.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.127 "is_configured": false, 00:13:08.127 "data_offset": 0, 00:13:08.127 "data_size": 0 00:13:08.127 }, 00:13:08.127 { 00:13:08.127 "name": "BaseBdev4", 00:13:08.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.127 "is_configured": false, 00:13:08.127 "data_offset": 0, 00:13:08.127 "data_size": 0 00:13:08.127 } 00:13:08.127 ] 00:13:08.127 }' 00:13:08.127 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.127 18:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.693 18:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:08.693 [2024-07-15 18:27:00.995993] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.693 [2024-07-15 18:27:00.996026] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2bb510c34500 name Existed_Raid, state configuring 00:13:08.693 18:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:08.951 [2024-07-15 18:27:01.284023] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:08.951 [2024-07-15 18:27:01.284084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:08.951 [2024-07-15 18:27:01.284090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:08.951 [2024-07-15 18:27:01.284099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:08.951 [2024-07-15 18:27:01.284103] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:08.951 [2024-07-15 18:27:01.284111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:08.951 [2024-07-15 18:27:01.284115] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:08.951 [2024-07-15 18:27:01.284122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:08.951 18:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:09.518 [2024-07-15 18:27:01.597169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:09.518 BaseBdev1
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:09.518 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:09.775 18:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:10.034 [
00:13:10.034 {
00:13:10.034 "name": "BaseBdev1",
00:13:10.034 "aliases": [
00:13:10.034 "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5"
00:13:10.034 ],
00:13:10.034 "product_name": "Malloc disk",
00:13:10.034 "block_size": 512,
00:13:10.034 "num_blocks": 65536,
00:13:10.034 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:10.034 "assigned_rate_limits": {
00:13:10.034 "rw_ios_per_sec": 0,
00:13:10.034 "rw_mbytes_per_sec": 0,
00:13:10.034 "r_mbytes_per_sec": 0,
00:13:10.034 "w_mbytes_per_sec": 0
00:13:10.034 },
00:13:10.034 "claimed": true,
00:13:10.034 "claim_type": "exclusive_write",
00:13:10.034 "zoned": false,
00:13:10.034 "supported_io_types": {
00:13:10.034 "read": true,
00:13:10.034 "write": true,
00:13:10.034 "unmap": true,
00:13:10.034 "flush": true,
00:13:10.034 "reset": true,
00:13:10.034 "nvme_admin": false,
00:13:10.034 "nvme_io": false,
00:13:10.034 "nvme_io_md": false,
00:13:10.034 "write_zeroes": true,
00:13:10.034 "zcopy": true,
00:13:10.034 "get_zone_info": false,
00:13:10.034 "zone_management": false,
00:13:10.034 "zone_append": false,
00:13:10.034 "compare": false,
00:13:10.034 "compare_and_write": false,
00:13:10.034 "abort": true,
00:13:10.034 "seek_hole": false,
00:13:10.034 "seek_data": false,
00:13:10.034 "copy": true,
00:13:10.034 "nvme_iov_md": false
00:13:10.034 },
00:13:10.034 "memory_domains": [
00:13:10.034 {
00:13:10.034 "dma_device_id": "system",
00:13:10.034 "dma_device_type": 1
00:13:10.034 },
00:13:10.034 {
00:13:10.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:10.034 "dma_device_type": 2
00:13:10.034 }
00:13:10.034 ],
00:13:10.034 "driver_specific": {}
00:13:10.034 }
00:13:10.034 ]
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:10.034 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:10.292 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:10.292 "name": "Existed_Raid",
00:13:10.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.292 "strip_size_kb": 64,
00:13:10.292 "state": "configuring",
00:13:10.292 "raid_level": "raid0",
00:13:10.292 "superblock": false,
00:13:10.292 "num_base_bdevs": 4,
00:13:10.292 "num_base_bdevs_discovered": 1,
00:13:10.292 "num_base_bdevs_operational": 4,
00:13:10.292 "base_bdevs_list": [
00:13:10.292 {
00:13:10.292 "name": "BaseBdev1",
00:13:10.292 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:10.292 "is_configured": true,
00:13:10.292 "data_offset": 0,
00:13:10.292 "data_size": 65536
00:13:10.292 },
00:13:10.292 {
00:13:10.292 "name": "BaseBdev2",
00:13:10.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.292 "is_configured": false,
00:13:10.292 "data_offset": 0,
00:13:10.292 "data_size": 0
00:13:10.292 },
00:13:10.292 {
00:13:10.292 "name": "BaseBdev3",
00:13:10.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.292 "is_configured": false,
00:13:10.292 "data_offset": 0,
00:13:10.292 "data_size": 0
00:13:10.292 },
00:13:10.292 {
00:13:10.292 "name": "BaseBdev4",
00:13:10.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.292 "is_configured": false,
00:13:10.292 "data_offset": 0,
00:13:10.292 "data_size": 0
00:13:10.292 }
00:13:10.292 ]
00:13:10.292 }'
00:13:10.292 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:10.292 18:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.551 18:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:11.116 [2024-07-15 18:27:03.196156] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:11.116 [2024-07-15 18:27:03.196196] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2bb510c34500 name Existed_Raid, state configuring
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:13:11.116 [2024-07-15 18:27:03.432190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:11.116 [2024-07-15 18:27:03.433026] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:11.116 [2024-07-15 18:27:03.433068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:11.116 [2024-07-15 18:27:03.433074] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:11.116 [2024-07-15 18:27:03.433083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:11.116 [2024-07-15 18:27:03.433087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:11.116 [2024-07-15 18:27:03.433095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:11.116 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:11.680 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:11.680 "name": "Existed_Raid",
00:13:11.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.680 "strip_size_kb": 64,
00:13:11.680 "state": "configuring",
00:13:11.680 "raid_level": "raid0",
00:13:11.680 "superblock": false,
00:13:11.680 "num_base_bdevs": 4,
00:13:11.680 "num_base_bdevs_discovered": 1,
00:13:11.680 "num_base_bdevs_operational": 4,
00:13:11.680 "base_bdevs_list": [
00:13:11.680 {
00:13:11.680 "name": "BaseBdev1",
00:13:11.680 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:11.680 "is_configured": true,
00:13:11.680 "data_offset": 0,
00:13:11.680 "data_size": 65536
00:13:11.680 },
00:13:11.680 {
00:13:11.680 "name": "BaseBdev2",
00:13:11.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.680 "is_configured": false,
00:13:11.680 "data_offset": 0,
00:13:11.680 "data_size": 0
00:13:11.680 },
00:13:11.680 {
00:13:11.680 "name": "BaseBdev3",
00:13:11.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.680 "is_configured": false,
00:13:11.680 "data_offset": 0,
00:13:11.680 "data_size": 0
00:13:11.680 },
00:13:11.680 {
00:13:11.680 "name": "BaseBdev4",
00:13:11.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.680 "is_configured": false,
00:13:11.680 "data_offset": 0,
00:13:11.680 "data_size": 0
00:13:11.680 }
00:13:11.680 ]
00:13:11.680 }'
00:13:11.680 18:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:11.680 18:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.938 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:12.243 [2024-07-15 18:27:04.396406] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:12.243 BaseBdev2
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:12.243 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:12.501 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:12.760 [
00:13:12.760 {
00:13:12.760 "name": "BaseBdev2",
00:13:12.760 "aliases": [
00:13:12.760 "d5ca0325-42d7-11ef-9ade-d5fc5159efa5"
00:13:12.760 ],
00:13:12.760 "product_name": "Malloc disk",
00:13:12.760 "block_size": 512,
00:13:12.760 "num_blocks": 65536,
00:13:12.760 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:12.760 "assigned_rate_limits": {
00:13:12.760 "rw_ios_per_sec": 0,
00:13:12.760 "rw_mbytes_per_sec": 0,
00:13:12.760 "r_mbytes_per_sec": 0,
00:13:12.760 "w_mbytes_per_sec": 0
00:13:12.761 },
00:13:12.761 "claimed": true,
00:13:12.761 "claim_type": "exclusive_write",
00:13:12.761 "zoned": false,
00:13:12.761 "supported_io_types": {
00:13:12.761 "read": true,
00:13:12.761 "write": true,
00:13:12.761 "unmap": true,
00:13:12.761 "flush": true,
00:13:12.761 "reset": true,
00:13:12.761 "nvme_admin": false,
00:13:12.761 "nvme_io": false,
00:13:12.761 "nvme_io_md": false,
00:13:12.761 "write_zeroes": true,
00:13:12.761 "zcopy": true,
00:13:12.761 "get_zone_info": false,
00:13:12.761 "zone_management": false,
00:13:12.761 "zone_append": false,
00:13:12.761 "compare": false,
00:13:12.761 "compare_and_write": false,
00:13:12.761 "abort": true,
00:13:12.761 "seek_hole": false,
00:13:12.761 "seek_data": false,
00:13:12.761 "copy": true,
00:13:12.761 "nvme_iov_md": false
00:13:12.761 },
00:13:12.761 "memory_domains": [
00:13:12.761 {
00:13:12.761 "dma_device_id": "system",
00:13:12.761 "dma_device_type": 1
00:13:12.761 },
00:13:12.761 {
00:13:12.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:12.761 "dma_device_type": 2
00:13:12.761 }
00:13:12.761 ],
00:13:12.761 "driver_specific": {}
00:13:12.761 }
00:13:12.761 ]
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:12.761 18:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:13.019 18:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:13.019 "name": "Existed_Raid",
00:13:13.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.019 "strip_size_kb": 64,
00:13:13.019 "state": "configuring",
00:13:13.019 "raid_level": "raid0",
00:13:13.019 "superblock": false,
00:13:13.019 "num_base_bdevs": 4,
00:13:13.019 "num_base_bdevs_discovered": 2,
00:13:13.019 "num_base_bdevs_operational": 4,
00:13:13.019 "base_bdevs_list": [
00:13:13.019 {
00:13:13.019 "name": "BaseBdev1",
00:13:13.020 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:13.020 "is_configured": true,
00:13:13.020 "data_offset": 0,
00:13:13.020 "data_size": 65536
00:13:13.020 },
00:13:13.020 {
00:13:13.020 "name": "BaseBdev2",
00:13:13.020 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:13.020 "is_configured": true,
00:13:13.020 "data_offset": 0,
00:13:13.020 "data_size": 65536
00:13:13.020 },
00:13:13.020 {
00:13:13.020 "name": "BaseBdev3",
00:13:13.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.020 "is_configured": false,
00:13:13.020 "data_offset": 0,
00:13:13.020 "data_size": 0
00:13:13.020 },
00:13:13.020 {
00:13:13.020 "name": "BaseBdev4",
00:13:13.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.020 "is_configured": false,
00:13:13.020 "data_offset": 0,
00:13:13.020 "data_size": 0
00:13:13.020 }
00:13:13.020 ]
00:13:13.020 }'
00:13:13.020 18:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:13.020 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:13.277 18:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:13:13.535 [2024-07-15 18:27:05.768510] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:13.535 BaseBdev3
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:13.535 18:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:13.794 18:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:14.053 [
00:13:14.053 {
00:13:14.053 "name": "BaseBdev3",
00:13:14.053 "aliases": [
00:13:14.053 "d69b619b-42d7-11ef-9ade-d5fc5159efa5"
00:13:14.053 ],
00:13:14.053 "product_name": "Malloc disk",
00:13:14.053 "block_size": 512,
00:13:14.053 "num_blocks": 65536,
00:13:14.053 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5",
00:13:14.053 "assigned_rate_limits": {
00:13:14.053 "rw_ios_per_sec": 0,
00:13:14.053 "rw_mbytes_per_sec": 0,
00:13:14.053 "r_mbytes_per_sec": 0,
00:13:14.053 "w_mbytes_per_sec": 0
00:13:14.053 },
00:13:14.053 "claimed": true,
00:13:14.053 "claim_type": "exclusive_write",
00:13:14.053 "zoned": false,
00:13:14.053 "supported_io_types": {
00:13:14.053 "read": true,
00:13:14.053 "write": true,
00:13:14.053 "unmap": true,
00:13:14.053 "flush": true,
00:13:14.053 "reset": true,
00:13:14.053 "nvme_admin": false,
00:13:14.053 "nvme_io": false,
00:13:14.053 "nvme_io_md": false,
00:13:14.053 "write_zeroes": true,
00:13:14.053 "zcopy": true,
00:13:14.053 "get_zone_info": false,
00:13:14.053 "zone_management": false,
00:13:14.053 "zone_append": false,
00:13:14.053 "compare": false,
00:13:14.053 "compare_and_write": false,
00:13:14.053 "abort": true,
00:13:14.053 "seek_hole": false,
00:13:14.053 "seek_data": false,
00:13:14.053 "copy": true,
00:13:14.053 "nvme_iov_md": false
00:13:14.053 },
00:13:14.053 "memory_domains": [
00:13:14.053 {
00:13:14.053 "dma_device_id": "system",
00:13:14.053 "dma_device_type": 1
00:13:14.053 },
00:13:14.053 {
00:13:14.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:14.053 "dma_device_type": 2
00:13:14.053 }
00:13:14.053 ],
00:13:14.053 "driver_specific": {}
00:13:14.053 }
00:13:14.053 ]
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:14.053 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:14.311 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:14.311 "name": "Existed_Raid",
00:13:14.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.311 "strip_size_kb": 64,
00:13:14.311 "state": "configuring",
00:13:14.311 "raid_level": "raid0",
00:13:14.311 "superblock": false,
00:13:14.311 "num_base_bdevs": 4,
00:13:14.311 "num_base_bdevs_discovered": 3,
00:13:14.311 "num_base_bdevs_operational": 4,
00:13:14.311 "base_bdevs_list": [
00:13:14.311 {
00:13:14.311 "name": "BaseBdev1",
00:13:14.311 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:14.311 "is_configured": true,
00:13:14.311 "data_offset": 0,
00:13:14.311 "data_size": 65536
00:13:14.311 },
00:13:14.311 {
00:13:14.311 "name": "BaseBdev2",
00:13:14.311 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:14.311 "is_configured": true,
00:13:14.311 "data_offset": 0,
00:13:14.311 "data_size": 65536
00:13:14.311 },
00:13:14.311 {
00:13:14.311 "name": "BaseBdev3",
00:13:14.311 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5",
00:13:14.311 "is_configured": true,
00:13:14.311 "data_offset": 0,
00:13:14.311 "data_size": 65536
00:13:14.311 },
00:13:14.311 {
00:13:14.311 "name": "BaseBdev4",
00:13:14.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.311 "is_configured": false,
00:13:14.311 "data_offset": 0,
00:13:14.311 "data_size": 0
00:13:14.311 }
00:13:14.311 ]
00:13:14.311 }'
00:13:14.311 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:14.311 18:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.569 18:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:13:14.827 [2024-07-15 18:27:07.192611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:14.827 [2024-07-15 18:27:07.192641] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2bb510c34a00
00:13:14.827 [2024-07-15 18:27:07.192646] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:13:14.828 [2024-07-15 18:27:07.192678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2bb510c97e20
00:13:14.828 [2024-07-15 18:27:07.192784] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2bb510c34a00
00:13:14.828 [2024-07-15 18:27:07.192789] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2bb510c34a00
00:13:14.828 [2024-07-15 18:27:07.192822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:14.828 BaseBdev4
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:14.828 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:15.394 [
00:13:15.394 {
00:13:15.394 "name": "BaseBdev4",
00:13:15.394 "aliases": [
00:13:15.394 "d774ae7f-42d7-11ef-9ade-d5fc5159efa5"
00:13:15.394 ],
00:13:15.394 "product_name": "Malloc disk",
00:13:15.394 "block_size": 512,
00:13:15.394 "num_blocks": 65536,
00:13:15.394 "uuid": "d774ae7f-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.394 "assigned_rate_limits": {
00:13:15.394 "rw_ios_per_sec": 0,
00:13:15.394 "rw_mbytes_per_sec": 0,
00:13:15.394 "r_mbytes_per_sec": 0,
00:13:15.394 "w_mbytes_per_sec": 0
00:13:15.394 },
00:13:15.394 "claimed": true,
00:13:15.394 "claim_type": "exclusive_write",
00:13:15.394 "zoned": false,
00:13:15.394 "supported_io_types": {
00:13:15.394 "read": true,
00:13:15.394 "write": true,
00:13:15.394 "unmap": true,
00:13:15.394 "flush": true,
00:13:15.394 "reset": true,
00:13:15.394 "nvme_admin": false,
00:13:15.394 "nvme_io": false,
00:13:15.394 "nvme_io_md": false,
00:13:15.394 "write_zeroes": true,
00:13:15.394 "zcopy": true,
00:13:15.394 "get_zone_info": false,
00:13:15.394 "zone_management": false,
00:13:15.394 "zone_append": false,
00:13:15.394 "compare": false,
00:13:15.394 "compare_and_write": false,
00:13:15.394 "abort": true,
00:13:15.394 "seek_hole": false,
00:13:15.394 "seek_data": false,
00:13:15.394 "copy": true,
00:13:15.394 "nvme_iov_md": false
00:13:15.394 },
00:13:15.394 "memory_domains": [
00:13:15.394 {
00:13:15.394 "dma_device_id": "system",
00:13:15.394 "dma_device_type": 1
00:13:15.394 },
00:13:15.394 {
00:13:15.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:15.394 "dma_device_type": 2
00:13:15.394 }
00:13:15.394 ],
00:13:15.394 "driver_specific": {}
00:13:15.394 }
00:13:15.394 ]
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:15.394 18:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:15.652 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:15.652 "name": "Existed_Raid",
00:13:15.652 "uuid": "d774b54d-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.652 "strip_size_kb": 64,
00:13:15.652 "state": "online",
00:13:15.652 "raid_level": "raid0",
00:13:15.652 "superblock": false,
00:13:15.652 "num_base_bdevs": 4,
00:13:15.652 "num_base_bdevs_discovered": 4,
00:13:15.652 "num_base_bdevs_operational": 4,
00:13:15.652 "base_bdevs_list": [
00:13:15.652 {
00:13:15.652 "name": "BaseBdev1",
00:13:15.652 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.652 "is_configured": true,
00:13:15.652 "data_offset": 0,
00:13:15.652 "data_size": 65536
00:13:15.652 },
00:13:15.652 {
00:13:15.652 "name": "BaseBdev2",
00:13:15.652 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.652 "is_configured": true,
00:13:15.652 "data_offset": 0,
00:13:15.652 "data_size": 65536
00:13:15.652 },
00:13:15.652 {
00:13:15.652 "name": "BaseBdev3",
00:13:15.652 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.652 "is_configured": true,
00:13:15.652 "data_offset": 0,
00:13:15.652 "data_size": 65536
00:13:15.652 },
00:13:15.652 {
00:13:15.652 "name": "BaseBdev4",
00:13:15.652 "uuid": "d774ae7f-42d7-11ef-9ade-d5fc5159efa5",
00:13:15.652 "is_configured": true,
00:13:15.652 "data_offset": 0,
00:13:15.652 "data_size": 65536
00:13:15.652 }
00:13:15.652 ]
00:13:15.652 }'
00:13:15.652 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:15.652 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:13:16.223 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:13:16.481 [2024-07-15 18:27:08.632633] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:16.481 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:13:16.481 "name": "Existed_Raid",
00:13:16.481 "aliases": [
00:13:16.481 "d774b54d-42d7-11ef-9ade-d5fc5159efa5"
00:13:16.481 ],
00:13:16.481 "product_name": "Raid Volume",
00:13:16.481 "block_size": 512,
00:13:16.481 "num_blocks": 262144,
00:13:16.481 "uuid": "d774b54d-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "assigned_rate_limits": {
00:13:16.482 "rw_ios_per_sec": 0,
00:13:16.482 "rw_mbytes_per_sec": 0,
00:13:16.482 "r_mbytes_per_sec": 0,
00:13:16.482 "w_mbytes_per_sec": 0
00:13:16.482 },
00:13:16.482 "claimed": false,
00:13:16.482 "zoned": false,
00:13:16.482 "supported_io_types": {
00:13:16.482 "read": true,
00:13:16.482 "write": true,
00:13:16.482 "unmap": true,
00:13:16.482 "flush": true,
00:13:16.482 "reset": true,
00:13:16.482 "nvme_admin": false,
00:13:16.482 "nvme_io": false,
00:13:16.482 "nvme_io_md": false,
00:13:16.482 "write_zeroes": true,
00:13:16.482 "zcopy": false,
00:13:16.482 "get_zone_info": false,
00:13:16.482 "zone_management": false,
00:13:16.482 "zone_append": false,
00:13:16.482 "compare": false,
00:13:16.482 "compare_and_write": false,
00:13:16.482 "abort": false,
00:13:16.482 "seek_hole": false,
00:13:16.482 "seek_data": false,
00:13:16.482 "copy": false,
00:13:16.482 "nvme_iov_md": false
00:13:16.482 },
00:13:16.482 "memory_domains": [
00:13:16.482 {
00:13:16.482 "dma_device_id": "system",
00:13:16.482 "dma_device_type": 1
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:16.482 "dma_device_type": 2
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "system",
00:13:16.482 "dma_device_type": 1
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:16.482 "dma_device_type": 2
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "system",
00:13:16.482 "dma_device_type": 1
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:16.482 "dma_device_type": 2
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "system",
00:13:16.482 "dma_device_type": 1
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:16.482 "dma_device_type": 2
00:13:16.482 }
00:13:16.482 ],
00:13:16.482 "driver_specific": {
00:13:16.482 "raid": {
00:13:16.482 "uuid": "d774b54d-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "strip_size_kb": 64,
00:13:16.482 "state": "online",
00:13:16.482 "raid_level": "raid0",
00:13:16.482 "superblock": false,
00:13:16.482 "num_base_bdevs": 4,
00:13:16.482 "num_base_bdevs_discovered": 4,
00:13:16.482 "num_base_bdevs_operational": 4,
00:13:16.482 "base_bdevs_list": [
00:13:16.482 {
00:13:16.482 "name": "BaseBdev1",
00:13:16.482 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "is_configured": true,
00:13:16.482 "data_offset": 0,
00:13:16.482 "data_size": 65536
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "name": "BaseBdev2",
00:13:16.482 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "is_configured": true,
00:13:16.482 "data_offset": 0,
00:13:16.482 "data_size": 65536
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "name": "BaseBdev3",
00:13:16.482 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "is_configured": true,
00:13:16.482 "data_offset": 0,
00:13:16.482 "data_size": 65536
00:13:16.482 },
00:13:16.482 {
00:13:16.482 "name": "BaseBdev4",
00:13:16.482 "uuid": "d774ae7f-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.482 "is_configured": true,
00:13:16.482 "data_offset": 0,
00:13:16.482 "data_size": 65536
00:13:16.482 }
00:13:16.482 ]
00:13:16.482 }
00:13:16.482 }
00:13:16.482 }'
00:13:16.482 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:16.482 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:13:16.482 BaseBdev2
00:13:16.482 BaseBdev3
00:13:16.482 BaseBdev4'
00:13:16.482 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:16.482 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:13:16.482 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:16.743 "name": "BaseBdev1",
00:13:16.743 "aliases": [
00:13:16.743 "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5"
00:13:16.743 ],
00:13:16.743 "product_name": "Malloc disk",
00:13:16.743 "block_size": 512,
00:13:16.743 "num_blocks": 65536,
00:13:16.743 "uuid": "d41ebc0d-42d7-11ef-9ade-d5fc5159efa5",
00:13:16.743 "assigned_rate_limits": {
00:13:16.743 "rw_ios_per_sec": 0,
00:13:16.743 "rw_mbytes_per_sec": 0,
00:13:16.743 "r_mbytes_per_sec": 0,
00:13:16.743 "w_mbytes_per_sec": 0
00:13:16.743 },
00:13:16.743 "claimed": true,
00:13:16.743 "claim_type": "exclusive_write",
00:13:16.743 "zoned": false,
00:13:16.743 "supported_io_types": {
00:13:16.743 "read": true,
00:13:16.743 "write": true,
00:13:16.743 "unmap": true,
00:13:16.743 "flush": true,
00:13:16.743 "reset": true,
00:13:16.743 "nvme_admin": false,
00:13:16.743 "nvme_io": false,
00:13:16.743 "nvme_io_md": false,
00:13:16.743 "write_zeroes": true,
00:13:16.743 "zcopy": true,
00:13:16.743 "get_zone_info": false,
00:13:16.743 "zone_management": false,
00:13:16.743 "zone_append": false,
00:13:16.743 "compare": false,
00:13:16.743 "compare_and_write": false,
00:13:16.743 "abort": true,
00:13:16.743 "seek_hole": false,
00:13:16.743 "seek_data": false,
00:13:16.743 "copy": true,
00:13:16.743 "nvme_iov_md": false
00:13:16.743 },
00:13:16.743 "memory_domains": [
00:13:16.743 {
00:13:16.743 "dma_device_id": "system",
00:13:16.743 "dma_device_type": 1
00:13:16.743 },
00:13:16.743 {
00:13:16.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:16.743 "dma_device_type": 2
00:13:16.743 }
00:13:16.743 ],
00:13:16.743 "driver_specific": {}
00:13:16.743 }'
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:16.743 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:16.743 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:16.743 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:16.743 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:16.743 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:13:16.743 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:17.001 "name": "BaseBdev2",
00:13:17.001 "aliases": [
00:13:17.001 "d5ca0325-42d7-11ef-9ade-d5fc5159efa5"
00:13:17.001 ],
00:13:17.001 "product_name": "Malloc disk",
00:13:17.001 "block_size": 512,
00:13:17.001 "num_blocks": 65536,
00:13:17.001 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5",
00:13:17.001 "assigned_rate_limits": {
00:13:17.001 "rw_ios_per_sec": 0,
00:13:17.001 "rw_mbytes_per_sec": 0,
00:13:17.001 "r_mbytes_per_sec": 0,
00:13:17.001 "w_mbytes_per_sec": 0
00:13:17.001 },
00:13:17.001 "claimed": true,
00:13:17.001 "claim_type": "exclusive_write",
00:13:17.001 "zoned": false,
00:13:17.001 "supported_io_types": {
00:13:17.001 "read": true,
00:13:17.001 "write": true,
00:13:17.001 "unmap": true,
00:13:17.001 "flush": true,
00:13:17.001 "reset": true,
00:13:17.001 "nvme_admin": false,
00:13:17.001 "nvme_io": false,
00:13:17.001 "nvme_io_md": false,
00:13:17.001 "write_zeroes": true,
00:13:17.001 "zcopy": true,
00:13:17.001 "get_zone_info": false,
00:13:17.001 "zone_management": false,
00:13:17.001 "zone_append": false,
00:13:17.001 "compare": false,
00:13:17.001 "compare_and_write": false,
00:13:17.001 "abort": true,
00:13:17.001 "seek_hole": false,
00:13:17.001 "seek_data": false,
00:13:17.001 "copy": true,
00:13:17.001 "nvme_iov_md": false
00:13:17.001 },
00:13:17.001 "memory_domains": [
00:13:17.001 {
00:13:17.001 "dma_device_id": "system",
00:13:17.001 "dma_device_type": 1
00:13:17.001 },
00:13:17.001 {
00:13:17.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:17.001 "dma_device_type": 2
00:13:17.001 }
00:13:17.001 ],
00:13:17.001 "driver_specific": {}
00:13:17.001 }'
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.001 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:13:17.002 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:17.260 "name": "BaseBdev3",
00:13:17.260 "aliases": [
00:13:17.260 "d69b619b-42d7-11ef-9ade-d5fc5159efa5"
00:13:17.260 ],
00:13:17.260 "product_name": "Malloc disk",
00:13:17.260 "block_size": 512,
00:13:17.260 "num_blocks": 65536,
00:13:17.260 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5",
00:13:17.260 "assigned_rate_limits": {
00:13:17.260 "rw_ios_per_sec": 0,
00:13:17.260 "rw_mbytes_per_sec": 0,
00:13:17.260 "r_mbytes_per_sec": 0,
00:13:17.260 "w_mbytes_per_sec": 0
00:13:17.260 },
00:13:17.260 "claimed": true,
00:13:17.260 "claim_type": "exclusive_write",
00:13:17.260 "zoned": false,
00:13:17.260 "supported_io_types": {
00:13:17.260 "read": true,
00:13:17.260 "write": true,
00:13:17.260 "unmap": true,
00:13:17.260 "flush": true,
00:13:17.260 "reset": true,
00:13:17.260 "nvme_admin": false,
00:13:17.260 "nvme_io": false,
00:13:17.260 "nvme_io_md": false,
00:13:17.260 "write_zeroes": true,
00:13:17.260 "zcopy": true,
00:13:17.260 "get_zone_info": false,
00:13:17.260 "zone_management": false,
00:13:17.260 "zone_append": false,
00:13:17.260 "compare": false,
00:13:17.260 "compare_and_write": false,
00:13:17.260 "abort": true,
00:13:17.260 "seek_hole": false,
00:13:17.260 "seek_data": false,
00:13:17.260 "copy": true,
00:13:17.260 "nvme_iov_md": false
00:13:17.260 },
00:13:17.260 "memory_domains": [
00:13:17.260 {
00:13:17.260 "dma_device_id": "system",
00:13:17.260 "dma_device_type": 1
00:13:17.260 },
00:13:17.260 {
00:13:17.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:17.260 "dma_device_type": 2
00:13:17.260 }
00:13:17.260 ],
00:13:17.260 "driver_specific": {}
00:13:17.260 }'
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:17.260 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:13:17.518 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:17.777 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:17.777 "name": "BaseBdev4",
00:13:17.777 "aliases": [
00:13:17.777 "d774ae7f-42d7-11ef-9ade-d5fc5159efa5"
00:13:17.777 ],
00:13:17.777 "product_name": "Malloc disk",
00:13:17.777 "block_size": 512,
00:13:17.777 "num_blocks": 65536,
00:13:17.777 "uuid": "d774ae7f-42d7-11ef-9ade-d5fc5159efa5",
00:13:17.777 "assigned_rate_limits": {
00:13:17.777 "rw_ios_per_sec": 0,
00:13:17.777 "rw_mbytes_per_sec": 0,
00:13:17.777 "r_mbytes_per_sec": 0,
00:13:17.777 "w_mbytes_per_sec": 0
00:13:17.777 },
00:13:17.777 "claimed": true,
00:13:17.777 "claim_type": "exclusive_write",
00:13:17.777 "zoned": false,
00:13:17.777 "supported_io_types": {
00:13:17.777 "read": true,
00:13:17.777 "write": true,
00:13:17.777 "unmap": true,
00:13:17.777 "flush": true,
00:13:17.777 "reset": true,
00:13:17.777 "nvme_admin": false,
00:13:17.777 "nvme_io": false,
00:13:17.777 "nvme_io_md": false,
00:13:17.777 "write_zeroes": true,
00:13:17.777 "zcopy": true,
00:13:17.777 "get_zone_info": false,
00:13:17.777 "zone_management": false,
00:13:17.777 "zone_append": false,
00:13:17.777 "compare": false,
00:13:17.777 "compare_and_write": false,
00:13:17.777 "abort": true,
00:13:17.777 "seek_hole": false,
00:13:17.777 "seek_data": false,
00:13:17.777 "copy": true,
00:13:17.777 "nvme_iov_md": false
00:13:17.777 },
00:13:17.777 "memory_domains": [
00:13:17.777 {
00:13:17.777 "dma_device_id": "system",
00:13:17.778 "dma_device_type": 1
00:13:17.778 },
00:13:17.778 {
00:13:17.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:17.778 "dma_device_type": 2
00:13:17.778 }
00:13:17.778 ],
00:13:17.778 "driver_specific": {}
00:13:17.778 }'
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:17.778 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:17.778 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:18.036 [2024-07-15 18:27:10.316738] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:18.036 [2024-07-15 18:27:10.316768] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:18.036 [2024-07-15 18:27:10.316800] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:18.036 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:18.295 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:18.295 "name": "Existed_Raid",
00:13:18.295 "uuid": "d774b54d-42d7-11ef-9ade-d5fc5159efa5",
00:13:18.295 "strip_size_kb": 64,
00:13:18.295 "state": "offline",
00:13:18.295 "raid_level": "raid0",
00:13:18.295 "superblock": false,
00:13:18.295 "num_base_bdevs": 4,
00:13:18.295 "num_base_bdevs_discovered": 3,
00:13:18.295 "num_base_bdevs_operational": 3,
00:13:18.295 "base_bdevs_list": [
00:13:18.295 {
00:13:18.295 "name": null,
00:13:18.295 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:18.295 "is_configured": false, 00:13:18.295 "data_offset": 0, 00:13:18.295 "data_size": 65536 00:13:18.295 }, 00:13:18.295 { 00:13:18.295 "name": "BaseBdev2", 00:13:18.295 "uuid": "d5ca0325-42d7-11ef-9ade-d5fc5159efa5", 00:13:18.295 "is_configured": true, 00:13:18.295 "data_offset": 0, 00:13:18.295 "data_size": 65536 00:13:18.295 }, 00:13:18.295 { 00:13:18.295 "name": "BaseBdev3", 00:13:18.295 "uuid": "d69b619b-42d7-11ef-9ade-d5fc5159efa5", 00:13:18.295 "is_configured": true, 00:13:18.295 "data_offset": 0, 00:13:18.295 "data_size": 65536 00:13:18.295 }, 00:13:18.295 { 00:13:18.295 "name": "BaseBdev4", 00:13:18.295 "uuid": "d774ae7f-42d7-11ef-9ade-d5fc5159efa5", 00:13:18.295 "is_configured": true, 00:13:18.295 "data_offset": 0, 00:13:18.295 "data_size": 65536 00:13:18.295 } 00:13:18.295 ] 00:13:18.295 }' 00:13:18.295 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.295 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.554 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:18.554 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:18.554 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:18.554 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:19.120 [2024-07-15 18:27:11.422674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.120 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:19.379 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:19.379 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.379 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:19.637 [2024-07-15 18:27:11.910952] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.637 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:19.637 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:19.637 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:19.637 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.904 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:19.904 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.904 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:20.166 [2024-07-15 18:27:12.483484] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:20.166 [2024-07-15 18:27:12.483519] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2bb510c34a00 name Existed_Raid, state offline 00:13:20.166 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:20.166 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:20.166 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.166 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:20.424 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:20.683 BaseBdev2 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.683 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.940 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.198 [ 00:13:21.198 { 00:13:21.198 "name": "BaseBdev2", 00:13:21.198 "aliases": [ 00:13:21.198 "daefea72-42d7-11ef-9ade-d5fc5159efa5" 00:13:21.198 ], 00:13:21.198 "product_name": "Malloc disk", 00:13:21.198 "block_size": 512, 00:13:21.198 "num_blocks": 65536, 00:13:21.198 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:21.198 "assigned_rate_limits": { 00:13:21.198 "rw_ios_per_sec": 0, 00:13:21.198 "rw_mbytes_per_sec": 0, 00:13:21.198 "r_mbytes_per_sec": 0, 00:13:21.198 "w_mbytes_per_sec": 0 
00:13:21.198 }, 00:13:21.198 "claimed": false, 00:13:21.198 "zoned": false, 00:13:21.198 "supported_io_types": { 00:13:21.198 "read": true, 00:13:21.198 "write": true, 00:13:21.198 "unmap": true, 00:13:21.198 "flush": true, 00:13:21.198 "reset": true, 00:13:21.198 "nvme_admin": false, 00:13:21.198 "nvme_io": false, 00:13:21.198 "nvme_io_md": false, 00:13:21.198 "write_zeroes": true, 00:13:21.198 "zcopy": true, 00:13:21.198 "get_zone_info": false, 00:13:21.198 "zone_management": false, 00:13:21.198 "zone_append": false, 00:13:21.198 "compare": false, 00:13:21.198 "compare_and_write": false, 00:13:21.198 "abort": true, 00:13:21.198 "seek_hole": false, 00:13:21.198 "seek_data": false, 00:13:21.198 "copy": true, 00:13:21.198 "nvme_iov_md": false 00:13:21.198 }, 00:13:21.198 "memory_domains": [ 00:13:21.198 { 00:13:21.198 "dma_device_id": "system", 00:13:21.198 "dma_device_type": 1 00:13:21.198 }, 00:13:21.198 { 00:13:21.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.198 "dma_device_type": 2 00:13:21.198 } 00:13:21.198 ], 00:13:21.198 "driver_specific": {} 00:13:21.198 } 00:13:21.198 ] 00:13:21.198 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:21.198 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:21.198 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:21.198 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:21.455 BaseBdev3 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:21.712 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.969 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:22.227 [ 00:13:22.227 { 00:13:22.227 "name": "BaseBdev3", 00:13:22.227 "aliases": [ 00:13:22.227 "db696223-42d7-11ef-9ade-d5fc5159efa5" 00:13:22.227 ], 00:13:22.227 "product_name": "Malloc disk", 00:13:22.227 "block_size": 512, 00:13:22.227 "num_blocks": 65536, 00:13:22.227 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:22.227 "assigned_rate_limits": { 00:13:22.227 "rw_ios_per_sec": 0, 00:13:22.227 "rw_mbytes_per_sec": 0, 00:13:22.227 "r_mbytes_per_sec": 0, 00:13:22.227 "w_mbytes_per_sec": 0 00:13:22.227 }, 00:13:22.227 "claimed": false, 00:13:22.227 "zoned": false, 00:13:22.227 "supported_io_types": { 00:13:22.227 "read": true, 00:13:22.227 "write": true, 00:13:22.227 "unmap": true, 00:13:22.227 "flush": true, 00:13:22.227 "reset": true, 00:13:22.227 "nvme_admin": false, 00:13:22.227 "nvme_io": false, 00:13:22.227 "nvme_io_md": 
false, 00:13:22.227 "write_zeroes": true, 00:13:22.227 "zcopy": true, 00:13:22.227 "get_zone_info": false, 00:13:22.227 "zone_management": false, 00:13:22.227 "zone_append": false, 00:13:22.227 "compare": false, 00:13:22.227 "compare_and_write": false, 00:13:22.227 "abort": true, 00:13:22.227 "seek_hole": false, 00:13:22.227 "seek_data": false, 00:13:22.227 "copy": true, 00:13:22.227 "nvme_iov_md": false 00:13:22.227 }, 00:13:22.227 "memory_domains": [ 00:13:22.227 { 00:13:22.227 "dma_device_id": "system", 00:13:22.227 "dma_device_type": 1 00:13:22.227 }, 00:13:22.227 { 00:13:22.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.227 "dma_device_type": 2 00:13:22.227 } 00:13:22.227 ], 00:13:22.227 "driver_specific": {} 00:13:22.227 } 00:13:22.227 ] 00:13:22.227 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:22.227 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:22.227 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:22.227 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:22.485 BaseBdev4 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:22.485 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.743 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:22.743 [ 00:13:22.743 { 00:13:22.743 "name": "BaseBdev4", 00:13:22.743 "aliases": [ 00:13:22.743 "dbe2da14-42d7-11ef-9ade-d5fc5159efa5" 00:13:22.743 ], 00:13:22.743 "product_name": "Malloc disk", 00:13:22.743 "block_size": 512, 00:13:22.743 "num_blocks": 65536, 00:13:22.743 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:22.743 "assigned_rate_limits": { 00:13:22.743 "rw_ios_per_sec": 0, 00:13:22.743 "rw_mbytes_per_sec": 0, 00:13:22.743 "r_mbytes_per_sec": 0, 00:13:22.743 "w_mbytes_per_sec": 0 00:13:22.743 }, 00:13:22.743 "claimed": false, 00:13:22.743 "zoned": false, 00:13:22.743 "supported_io_types": { 00:13:22.743 "read": true, 00:13:22.743 "write": true, 00:13:22.743 "unmap": true, 00:13:22.743 "flush": true, 00:13:22.743 "reset": true, 00:13:22.743 "nvme_admin": false, 00:13:22.743 "nvme_io": false, 00:13:22.743 "nvme_io_md": false, 00:13:22.743 "write_zeroes": true, 00:13:22.743 "zcopy": true, 00:13:22.743 "get_zone_info": false, 00:13:22.743 "zone_management": false, 00:13:22.743 "zone_append": false, 00:13:22.743 "compare": false, 00:13:22.743 "compare_and_write": false, 00:13:22.743 "abort": true, 00:13:22.743 "seek_hole": false, 00:13:22.743 "seek_data": false, 
00:13:22.743 "copy": true, 00:13:22.743 "nvme_iov_md": false 00:13:22.743 }, 00:13:22.743 "memory_domains": [ 00:13:22.743 { 00:13:22.743 "dma_device_id": "system", 00:13:22.743 "dma_device_type": 1 00:13:22.743 }, 00:13:22.743 { 00:13:22.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.743 "dma_device_type": 2 00:13:22.743 } 00:13:22.743 ], 00:13:22.743 "driver_specific": {} 00:13:22.743 } 00:13:22.743 ] 00:13:23.001 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:23.001 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:23.001 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:23.001 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:23.260 [2024-07-15 18:27:15.405452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.260 [2024-07-15 18:27:15.405519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.260 [2024-07-15 18:27:15.405530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.260 [2024-07-15 18:27:15.406118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.260 [2024-07-15 18:27:15.406149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.260 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.519 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.519 "name": "Existed_Raid", 00:13:23.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.519 "strip_size_kb": 64, 00:13:23.519 "state": "configuring", 00:13:23.519 "raid_level": "raid0", 00:13:23.519 "superblock": false, 00:13:23.519 "num_base_bdevs": 4, 00:13:23.519 "num_base_bdevs_discovered": 3, 00:13:23.519 "num_base_bdevs_operational": 
4, 00:13:23.519 "base_bdevs_list": [ 00:13:23.519 { 00:13:23.519 "name": "BaseBdev1", 00:13:23.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.519 "is_configured": false, 00:13:23.519 "data_offset": 0, 00:13:23.519 "data_size": 0 00:13:23.519 }, 00:13:23.519 { 00:13:23.519 "name": "BaseBdev2", 00:13:23.519 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:23.519 "is_configured": true, 00:13:23.519 "data_offset": 0, 00:13:23.519 "data_size": 65536 00:13:23.519 }, 00:13:23.519 { 00:13:23.519 "name": "BaseBdev3", 00:13:23.519 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:23.519 "is_configured": true, 00:13:23.519 "data_offset": 0, 00:13:23.519 "data_size": 65536 00:13:23.519 }, 00:13:23.519 { 00:13:23.519 "name": "BaseBdev4", 00:13:23.519 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:23.519 "is_configured": true, 00:13:23.519 "data_offset": 0, 00:13:23.519 "data_size": 65536 00:13:23.519 } 00:13:23.519 ] 00:13:23.519 }' 00:13:23.519 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.519 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.778 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:24.038 [2024-07-15 18:27:16.373512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.038 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.311 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.311 "name": "Existed_Raid", 00:13:24.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.311 "strip_size_kb": 64, 00:13:24.311 "state": "configuring", 00:13:24.311 "raid_level": "raid0", 00:13:24.311 "superblock": false, 00:13:24.311 "num_base_bdevs": 4, 00:13:24.311 "num_base_bdevs_discovered": 2, 00:13:24.311 "num_base_bdevs_operational": 4, 00:13:24.311 "base_bdevs_list": [ 00:13:24.311 { 00:13:24.311 "name": "BaseBdev1", 00:13:24.311 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:24.311 "is_configured": false, 00:13:24.311 "data_offset": 0, 00:13:24.311 "data_size": 0 00:13:24.311 }, 00:13:24.311 { 00:13:24.311 "name": null, 00:13:24.311 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:24.312 "is_configured": false, 00:13:24.312 "data_offset": 0, 00:13:24.312 "data_size": 65536 00:13:24.312 }, 00:13:24.312 { 00:13:24.312 "name": "BaseBdev3", 00:13:24.312 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:24.312 "is_configured": true, 00:13:24.312 "data_offset": 0, 00:13:24.312 "data_size": 65536 00:13:24.312 }, 00:13:24.312 { 00:13:24.312 "name": "BaseBdev4", 00:13:24.312 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:24.312 "is_configured": true, 00:13:24.312 "data_offset": 0, 00:13:24.312 "data_size": 65536 00:13:24.312 } 00:13:24.312 ] 00:13:24.312 }' 00:13:24.312 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.579 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.838 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.838 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:25.097 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:25.097 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.356 [2024-07-15 18:27:17.613743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.356 BaseBdev1 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:25.356 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.614 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.873 [ 00:13:25.873 { 00:13:25.873 "name": "BaseBdev1", 00:13:25.873 "aliases": [ 00:13:25.873 "ddaad206-42d7-11ef-9ade-d5fc5159efa5" 00:13:25.873 ], 00:13:25.873 "product_name": "Malloc disk", 00:13:25.873 "block_size": 512, 00:13:25.873 "num_blocks": 65536, 00:13:25.873 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:25.873 "assigned_rate_limits": { 00:13:25.873 "rw_ios_per_sec": 0, 00:13:25.873 "rw_mbytes_per_sec": 0, 00:13:25.873 "r_mbytes_per_sec": 0, 00:13:25.873 "w_mbytes_per_sec": 0 00:13:25.873 }, 00:13:25.873 "claimed": true, 00:13:25.873 "claim_type": "exclusive_write", 00:13:25.873 "zoned": false, 00:13:25.873 "supported_io_types": { 00:13:25.873 "read": true, 00:13:25.873 
"write": true, 00:13:25.873 "unmap": true, 00:13:25.873 "flush": true, 00:13:25.873 "reset": true, 00:13:25.873 "nvme_admin": false, 00:13:25.873 "nvme_io": false, 00:13:25.873 "nvme_io_md": false, 00:13:25.873 "write_zeroes": true, 00:13:25.873 "zcopy": true, 00:13:25.873 "get_zone_info": false, 00:13:25.873 "zone_management": false, 00:13:25.873 "zone_append": false, 00:13:25.873 "compare": false, 00:13:25.873 "compare_and_write": false, 00:13:25.873 "abort": true, 00:13:25.873 "seek_hole": false, 00:13:25.873 "seek_data": false, 00:13:25.873 "copy": true, 00:13:25.873 "nvme_iov_md": false 00:13:25.873 }, 00:13:25.873 "memory_domains": [ 00:13:25.873 { 00:13:25.873 "dma_device_id": "system", 00:13:25.873 "dma_device_type": 1 00:13:25.873 }, 00:13:25.873 { 00:13:25.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.873 "dma_device_type": 2 00:13:25.873 } 00:13:25.873 ], 00:13:25.873 "driver_specific": {} 00:13:25.873 } 00:13:25.873 ] 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.873 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.440 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.440 "name": "Existed_Raid", 00:13:26.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.440 "strip_size_kb": 64, 00:13:26.440 "state": "configuring", 00:13:26.440 "raid_level": "raid0", 00:13:26.440 "superblock": false, 00:13:26.440 "num_base_bdevs": 4, 00:13:26.440 "num_base_bdevs_discovered": 3, 00:13:26.440 "num_base_bdevs_operational": 4, 00:13:26.440 "base_bdevs_list": [ 00:13:26.440 { 00:13:26.440 "name": "BaseBdev1", 00:13:26.440 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:26.440 "is_configured": true, 00:13:26.440 "data_offset": 0, 00:13:26.440 "data_size": 65536 00:13:26.440 }, 00:13:26.440 { 00:13:26.440 "name": null, 00:13:26.440 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:26.440 "is_configured": false, 00:13:26.440 "data_offset": 0, 00:13:26.440 "data_size": 65536 00:13:26.440 }, 00:13:26.440 { 00:13:26.440 "name": "BaseBdev3", 00:13:26.440 "uuid": 
"db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:26.440 "is_configured": true, 00:13:26.440 "data_offset": 0, 00:13:26.440 "data_size": 65536 00:13:26.440 }, 00:13:26.440 { 00:13:26.440 "name": "BaseBdev4", 00:13:26.440 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:26.440 "is_configured": true, 00:13:26.440 "data_offset": 0, 00:13:26.440 "data_size": 65536 00:13:26.440 } 00:13:26.440 ] 00:13:26.440 }' 00:13:26.440 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.440 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.699 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.699 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.957 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:26.957 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:27.216 [2024-07-15 18:27:19.469752] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.216 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.475 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.475 "name": "Existed_Raid", 00:13:27.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.475 "strip_size_kb": 64, 00:13:27.475 "state": "configuring", 00:13:27.475 "raid_level": "raid0", 00:13:27.475 "superblock": false, 00:13:27.475 "num_base_bdevs": 4, 00:13:27.475 "num_base_bdevs_discovered": 2, 00:13:27.475 "num_base_bdevs_operational": 4, 00:13:27.475 "base_bdevs_list": [ 00:13:27.475 { 00:13:27.475 "name": "BaseBdev1", 00:13:27.475 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:27.475 "is_configured": true, 00:13:27.475 "data_offset": 0, 00:13:27.475 "data_size": 65536 00:13:27.475 }, 00:13:27.475 { 
00:13:27.475 "name": null, 00:13:27.475 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:27.475 "is_configured": false, 00:13:27.475 "data_offset": 0, 00:13:27.475 "data_size": 65536 00:13:27.475 }, 00:13:27.475 { 00:13:27.475 "name": null, 00:13:27.475 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:27.475 "is_configured": false, 00:13:27.475 "data_offset": 0, 00:13:27.475 "data_size": 65536 00:13:27.475 }, 00:13:27.475 { 00:13:27.475 "name": "BaseBdev4", 00:13:27.475 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:27.475 "is_configured": true, 00:13:27.475 "data_offset": 0, 00:13:27.475 "data_size": 65536 00:13:27.475 } 00:13:27.475 ] 00:13:27.475 }' 00:13:27.475 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.475 18:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.801 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.801 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.066 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:28.066 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.325 [2024-07-15 18:27:20.685859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.325 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.583 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.584 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.844 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.844 "name": "Existed_Raid", 00:13:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.844 "strip_size_kb": 64, 00:13:28.844 "state": "configuring", 00:13:28.844 "raid_level": "raid0", 00:13:28.844 "superblock": false, 00:13:28.844 "num_base_bdevs": 4, 00:13:28.844 "num_base_bdevs_discovered": 3, 00:13:28.844 
"num_base_bdevs_operational": 4, 00:13:28.844 "base_bdevs_list": [ 00:13:28.844 { 00:13:28.844 "name": "BaseBdev1", 00:13:28.844 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:28.844 "is_configured": true, 00:13:28.844 "data_offset": 0, 00:13:28.844 "data_size": 65536 00:13:28.844 }, 00:13:28.844 { 00:13:28.844 "name": null, 00:13:28.844 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:28.844 "is_configured": false, 00:13:28.844 "data_offset": 0, 00:13:28.844 "data_size": 65536 00:13:28.844 }, 00:13:28.844 { 00:13:28.844 "name": "BaseBdev3", 00:13:28.844 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:28.844 "is_configured": true, 00:13:28.844 "data_offset": 0, 00:13:28.844 "data_size": 65536 00:13:28.844 }, 00:13:28.844 { 00:13:28.844 "name": "BaseBdev4", 00:13:28.844 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:28.844 "is_configured": true, 00:13:28.844 "data_offset": 0, 00:13:28.844 "data_size": 65536 00:13:28.844 } 00:13:28.844 ] 00:13:28.844 }' 00:13:28.844 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.844 18:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.103 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.103 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.361 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:29.361 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:29.619 [2024-07-15 18:27:21.865963] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.619 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.878 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.878 "name": "Existed_Raid", 00:13:29.878 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:29.878 "strip_size_kb": 64, 00:13:29.878 "state": "configuring", 00:13:29.878 "raid_level": "raid0", 00:13:29.878 "superblock": false, 00:13:29.878 "num_base_bdevs": 4, 00:13:29.878 "num_base_bdevs_discovered": 2, 00:13:29.878 "num_base_bdevs_operational": 4, 00:13:29.878 "base_bdevs_list": [ 00:13:29.878 { 00:13:29.878 "name": null, 00:13:29.878 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:29.878 "is_configured": false, 00:13:29.878 "data_offset": 0, 00:13:29.878 "data_size": 65536 00:13:29.878 }, 00:13:29.878 { 00:13:29.878 "name": null, 00:13:29.878 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:29.878 "is_configured": false, 00:13:29.878 "data_offset": 0, 00:13:29.878 "data_size": 65536 00:13:29.878 }, 00:13:29.878 { 00:13:29.878 "name": "BaseBdev3", 00:13:29.878 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:29.878 "is_configured": true, 00:13:29.878 "data_offset": 0, 00:13:29.878 "data_size": 65536 00:13:29.878 }, 00:13:29.878 { 00:13:29.878 "name": "BaseBdev4", 00:13:29.878 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:29.878 "is_configured": true, 00:13:29.878 "data_offset": 0, 00:13:29.878 "data_size": 65536 00:13:29.878 } 00:13:29.878 ] 00:13:29.878 }' 00:13:29.878 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.878 18:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.138 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.713 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:30.713 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:30.713 [2024-07-15 18:27:23.095838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:30.979 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.246 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.246 "name": "Existed_Raid", 00:13:31.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.246 "strip_size_kb": 64, 00:13:31.246 "state": "configuring", 00:13:31.246 "raid_level": "raid0", 00:13:31.246 "superblock": false, 00:13:31.246 "num_base_bdevs": 4, 00:13:31.246 "num_base_bdevs_discovered": 3, 00:13:31.246 "num_base_bdevs_operational": 4, 00:13:31.246 "base_bdevs_list": [ 00:13:31.246 { 00:13:31.246 "name": null, 00:13:31.246 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:31.246 "is_configured": false, 00:13:31.246 "data_offset": 0, 00:13:31.246 "data_size": 65536 00:13:31.246 }, 00:13:31.246 { 00:13:31.246 "name": "BaseBdev2", 00:13:31.246 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:31.246 "is_configured": true, 00:13:31.246 "data_offset": 0, 00:13:31.246 "data_size": 65536 00:13:31.246 }, 00:13:31.246 { 00:13:31.246 "name": "BaseBdev3", 00:13:31.246 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:31.246 "is_configured": true, 00:13:31.246 "data_offset": 0, 00:13:31.246 "data_size": 65536 00:13:31.246 }, 00:13:31.246 { 00:13:31.246 "name": "BaseBdev4", 00:13:31.246 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:31.246 "is_configured": true, 00:13:31.246 "data_offset": 0, 00:13:31.246 "data_size": 65536 00:13:31.246 } 00:13:31.246 ] 00:13:31.246 }' 00:13:31.246 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.246 18:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.514 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.514 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.786 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:31.786 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.786 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:32.062 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ddaad206-42d7-11ef-9ade-d5fc5159efa5 00:13:32.325 [2024-07-15 18:27:24.700185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:32.325 [2024-07-15 18:27:24.700216] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2bb510c34f00 00:13:32.325 [2024-07-15 18:27:24.700221] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:32.325 [2024-07-15 18:27:24.700245] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2bb510c97e20 00:13:32.325 [2024-07-15 18:27:24.700321] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2bb510c34f00 00:13:32.325 [2024-07-15 18:27:24.700326] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2bb510c34f00 00:13:32.325 [2024-07-15 
18:27:24.700364] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.325 NewBaseBdev 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:32.583 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.892 18:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:33.149 [ 00:13:33.149 { 00:13:33.149 "name": "NewBaseBdev", 00:13:33.149 "aliases": [ 00:13:33.149 "ddaad206-42d7-11ef-9ade-d5fc5159efa5" 00:13:33.149 ], 00:13:33.149 "product_name": "Malloc disk", 00:13:33.149 "block_size": 512, 00:13:33.149 "num_blocks": 65536, 00:13:33.149 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.149 "assigned_rate_limits": { 00:13:33.149 "rw_ios_per_sec": 0, 00:13:33.149 "rw_mbytes_per_sec": 0, 00:13:33.149 "r_mbytes_per_sec": 0, 00:13:33.149 "w_mbytes_per_sec": 0 00:13:33.149 }, 00:13:33.149 "claimed": true, 00:13:33.149 "claim_type": "exclusive_write", 00:13:33.149 "zoned": false, 00:13:33.149 "supported_io_types": { 00:13:33.149 "read": true, 00:13:33.149 "write": true, 00:13:33.149 "unmap": true, 00:13:33.149 "flush": true, 00:13:33.149 "reset": true, 00:13:33.149 "nvme_admin": false, 00:13:33.149 "nvme_io": false, 00:13:33.149 "nvme_io_md": false, 00:13:33.150 "write_zeroes": true, 00:13:33.150 "zcopy": true, 00:13:33.150 "get_zone_info": false, 00:13:33.150 "zone_management": false, 00:13:33.150 "zone_append": false, 00:13:33.150 "compare": false, 00:13:33.150 "compare_and_write": false, 00:13:33.150 "abort": true, 00:13:33.150 "seek_hole": false, 00:13:33.150 "seek_data": false, 00:13:33.150 "copy": true, 00:13:33.150 "nvme_iov_md": false 00:13:33.150 }, 00:13:33.150 "memory_domains": [ 00:13:33.150 { 00:13:33.150 "dma_device_id": "system", 00:13:33.150 "dma_device_type": 1 00:13:33.150 }, 00:13:33.150 { 00:13:33.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.150 "dma_device_type": 2 00:13:33.150 } 00:13:33.150 ], 00:13:33.150 "driver_specific": {} 00:13:33.150 } 00:13:33.150 ] 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.150 
18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.150 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.408 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.408 "name": "Existed_Raid", 00:13:33.408 "uuid": "e1e426a9-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.408 "strip_size_kb": 64, 00:13:33.408 "state": "online", 00:13:33.408 "raid_level": "raid0", 00:13:33.408 "superblock": false, 00:13:33.408 "num_base_bdevs": 4, 00:13:33.408 "num_base_bdevs_discovered": 4, 00:13:33.408 "num_base_bdevs_operational": 4, 00:13:33.408 "base_bdevs_list": [ 00:13:33.408 { 00:13:33.408 "name": "NewBaseBdev", 00:13:33.408 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.408 "is_configured": true, 00:13:33.408 "data_offset": 0, 00:13:33.408 "data_size": 65536 00:13:33.408 }, 00:13:33.408 { 00:13:33.408 "name": "BaseBdev2", 00:13:33.408 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.408 "is_configured": true, 00:13:33.408 "data_offset": 0, 00:13:33.408 "data_size": 65536 00:13:33.408 }, 00:13:33.408 { 00:13:33.408 "name": "BaseBdev3", 00:13:33.408 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.408 "is_configured": true, 00:13:33.408 "data_offset": 0, 00:13:33.408 "data_size": 65536 00:13:33.408 }, 00:13:33.408 { 00:13:33.408 "name": "BaseBdev4", 00:13:33.408 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:33.408 "is_configured": true, 00:13:33.408 "data_offset": 0, 00:13:33.408 "data_size": 65536 00:13:33.408 } 00:13:33.408 ] 00:13:33.408 }' 00:13:33.408 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.408 18:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:33.668 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:34.236 [2024-07-15 18:27:26.324225] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:34.236 "name": "Existed_Raid", 00:13:34.236 "aliases": [ 00:13:34.236 "e1e426a9-42d7-11ef-9ade-d5fc5159efa5" 00:13:34.236 ], 00:13:34.236 "product_name": "Raid Volume", 00:13:34.236 "block_size": 512, 00:13:34.236 "num_blocks": 262144, 00:13:34.236 "uuid": "e1e426a9-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "assigned_rate_limits": { 00:13:34.236 "rw_ios_per_sec": 0, 00:13:34.236 "rw_mbytes_per_sec": 0, 00:13:34.236 "r_mbytes_per_sec": 0, 00:13:34.236 "w_mbytes_per_sec": 0 00:13:34.236 }, 00:13:34.236 "claimed": false, 00:13:34.236 "zoned": false, 00:13:34.236 "supported_io_types": { 00:13:34.236 "read": true, 00:13:34.236 "write": true, 00:13:34.236 "unmap": true, 00:13:34.236 "flush": true, 00:13:34.236 "reset": true, 00:13:34.236 "nvme_admin": false, 00:13:34.236 "nvme_io": false, 00:13:34.236 "nvme_io_md": false, 00:13:34.236 "write_zeroes": true, 00:13:34.236 "zcopy": false, 00:13:34.236 "get_zone_info": false, 00:13:34.236 "zone_management": false, 00:13:34.236 "zone_append": false, 00:13:34.236 "compare": false, 00:13:34.236 "compare_and_write": false, 00:13:34.236 "abort": false, 00:13:34.236 "seek_hole": false, 00:13:34.236 "seek_data": false, 00:13:34.236 "copy": false, 00:13:34.236 "nvme_iov_md": false 00:13:34.236 }, 00:13:34.236 "memory_domains": [ 00:13:34.236 { 00:13:34.236 "dma_device_id": "system", 00:13:34.236 "dma_device_type": 1 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.236 "dma_device_type": 2 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "system", 00:13:34.236 "dma_device_type": 1 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.236 "dma_device_type": 2 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "system", 00:13:34.236 "dma_device_type": 1 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.236 "dma_device_type": 2 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "system", 00:13:34.236 "dma_device_type": 1 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.236 "dma_device_type": 2 00:13:34.236 } 00:13:34.236 ], 00:13:34.236 "driver_specific": { 00:13:34.236 "raid": { 00:13:34.236 "uuid": "e1e426a9-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "strip_size_kb": 64, 00:13:34.236 "state": "online", 00:13:34.236 "raid_level": "raid0", 00:13:34.236 "superblock": false, 00:13:34.236 "num_base_bdevs": 4, 00:13:34.236 "num_base_bdevs_discovered": 4, 00:13:34.236 "num_base_bdevs_operational": 4, 00:13:34.236 "base_bdevs_list": [ 00:13:34.236 { 00:13:34.236 "name": "NewBaseBdev", 00:13:34.236 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "is_configured": true, 00:13:34.236 "data_offset": 0, 00:13:34.236 "data_size": 65536 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "name": "BaseBdev2", 00:13:34.236 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "is_configured": true, 00:13:34.236 "data_offset": 0, 00:13:34.236 "data_size": 65536 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "name": "BaseBdev3", 00:13:34.236 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "is_configured": true, 00:13:34.236 "data_offset": 0, 00:13:34.236 "data_size": 65536 00:13:34.236 }, 00:13:34.236 { 00:13:34.236 "name": "BaseBdev4", 00:13:34.236 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 
"is_configured": true, 00:13:34.236 "data_offset": 0, 00:13:34.236 "data_size": 65536 00:13:34.236 } 00:13:34.236 ] 00:13:34.236 } 00:13:34.236 } 00:13:34.236 }' 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:34.236 BaseBdev2 00:13:34.236 BaseBdev3 00:13:34.236 BaseBdev4' 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.236 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.236 "name": "NewBaseBdev", 00:13:34.236 "aliases": [ 00:13:34.236 "ddaad206-42d7-11ef-9ade-d5fc5159efa5" 00:13:34.236 ], 00:13:34.236 "product_name": "Malloc disk", 00:13:34.236 "block_size": 512, 00:13:34.236 "num_blocks": 65536, 00:13:34.236 "uuid": "ddaad206-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.236 "assigned_rate_limits": { 00:13:34.236 "rw_ios_per_sec": 0, 00:13:34.236 "rw_mbytes_per_sec": 0, 00:13:34.236 "r_mbytes_per_sec": 0, 00:13:34.236 "w_mbytes_per_sec": 0 00:13:34.236 }, 00:13:34.236 "claimed": true, 00:13:34.236 "claim_type": "exclusive_write", 00:13:34.236 "zoned": false, 00:13:34.236 "supported_io_types": { 00:13:34.236 "read": true, 00:13:34.236 "write": true, 00:13:34.236 "unmap": true, 00:13:34.236 "flush": true, 00:13:34.236 "reset": true, 00:13:34.236 "nvme_admin": false, 00:13:34.236 "nvme_io": false, 00:13:34.236 "nvme_io_md": false, 00:13:34.236 "write_zeroes": true, 00:13:34.237 "zcopy": true, 00:13:34.237 "get_zone_info": false, 00:13:34.237 "zone_management": false, 00:13:34.237 "zone_append": false, 00:13:34.237 "compare": false, 00:13:34.237 "compare_and_write": false, 00:13:34.237 "abort": true, 00:13:34.237 "seek_hole": false, 00:13:34.237 "seek_data": false, 00:13:34.237 "copy": true, 00:13:34.237 "nvme_iov_md": false 00:13:34.237 }, 00:13:34.237 "memory_domains": [ 00:13:34.237 { 00:13:34.237 "dma_device_id": "system", 00:13:34.237 "dma_device_type": 1 00:13:34.237 }, 00:13:34.237 { 00:13:34.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.237 "dma_device_type": 2 00:13:34.237 } 00:13:34.237 ], 00:13:34.237 "driver_specific": {} 00:13:34.237 }' 00:13:34.237 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.237 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.237 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.237 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:34.495 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.754 "name": "BaseBdev2", 00:13:34.754 "aliases": [ 00:13:34.754 "daefea72-42d7-11ef-9ade-d5fc5159efa5" 00:13:34.754 ], 00:13:34.754 "product_name": "Malloc disk", 00:13:34.754 "block_size": 512, 00:13:34.754 "num_blocks": 65536, 00:13:34.754 "uuid": "daefea72-42d7-11ef-9ade-d5fc5159efa5", 00:13:34.754 "assigned_rate_limits": { 00:13:34.754 "rw_ios_per_sec": 0, 00:13:34.754 "rw_mbytes_per_sec": 0, 00:13:34.754 "r_mbytes_per_sec": 0, 00:13:34.754 "w_mbytes_per_sec": 0 00:13:34.754 }, 00:13:34.754 "claimed": true, 00:13:34.754 "claim_type": "exclusive_write", 00:13:34.754 "zoned": false, 00:13:34.754 "supported_io_types": { 00:13:34.754 "read": true, 00:13:34.754 "write": true, 00:13:34.754 "unmap": true, 00:13:34.754 "flush": true, 00:13:34.754 "reset": true, 00:13:34.754 "nvme_admin": false, 00:13:34.754 "nvme_io": false, 00:13:34.754 "nvme_io_md": false, 00:13:34.754 "write_zeroes": true, 00:13:34.754 "zcopy": true, 00:13:34.754 "get_zone_info": false, 00:13:34.754 "zone_management": false, 00:13:34.754 "zone_append": false, 00:13:34.754 "compare": false, 00:13:34.754 "compare_and_write": false, 00:13:34.754 "abort": true, 00:13:34.754 "seek_hole": false, 00:13:34.754 "seek_data": false, 00:13:34.754 "copy": true, 00:13:34.754 "nvme_iov_md": false 00:13:34.754 }, 00:13:34.754 "memory_domains": [ 00:13:34.754 { 00:13:34.754 "dma_device_id": "system", 00:13:34.754 "dma_device_type": 1 00:13:34.754 }, 00:13:34.754 { 00:13:34.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.754 "dma_device_type": 2 00:13:34.754 } 00:13:34.754 ], 00:13:34.754 "driver_specific": {} 00:13:34.754 }' 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:34.754 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:35.013 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:35.013 "name": "BaseBdev3", 00:13:35.013 "aliases": [ 00:13:35.014 "db696223-42d7-11ef-9ade-d5fc5159efa5" 00:13:35.014 ], 00:13:35.014 "product_name": "Malloc disk", 00:13:35.014 "block_size": 512, 00:13:35.014 "num_blocks": 65536, 00:13:35.014 "uuid": "db696223-42d7-11ef-9ade-d5fc5159efa5", 00:13:35.014 "assigned_rate_limits": { 00:13:35.014 "rw_ios_per_sec": 0, 00:13:35.014 "rw_mbytes_per_sec": 0, 00:13:35.014 "r_mbytes_per_sec": 0, 00:13:35.014 "w_mbytes_per_sec": 0 00:13:35.014 }, 00:13:35.014 "claimed": true, 00:13:35.014 "claim_type": "exclusive_write", 00:13:35.014 "zoned": false, 00:13:35.014 "supported_io_types": { 00:13:35.014 "read": true, 00:13:35.014 "write": true, 00:13:35.014 "unmap": true, 00:13:35.014 "flush": true, 00:13:35.014 "reset": true, 00:13:35.014 "nvme_admin": false, 00:13:35.014 "nvme_io": false, 00:13:35.014 "nvme_io_md": false, 00:13:35.014 "write_zeroes": true, 00:13:35.014 "zcopy": true, 00:13:35.014 "get_zone_info": false, 00:13:35.014 "zone_management": false, 00:13:35.014 "zone_append": false, 00:13:35.014 "compare": false, 00:13:35.014 "compare_and_write": false, 00:13:35.014 "abort": true, 00:13:35.014 "seek_hole": false, 00:13:35.014 "seek_data": false, 00:13:35.014 "copy": true, 00:13:35.014 "nvme_iov_md": false 00:13:35.014 }, 00:13:35.014 "memory_domains": [ 00:13:35.014 { 00:13:35.014 "dma_device_id": "system", 00:13:35.014 "dma_device_type": 1 00:13:35.014 }, 00:13:35.014 { 00:13:35.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.014 "dma_device_type": 2 00:13:35.014 } 00:13:35.014 ], 00:13:35.014 "driver_specific": {} 00:13:35.014 }' 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:35.014 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:35.275 "name": "BaseBdev4", 00:13:35.275 "aliases": [ 00:13:35.275 "dbe2da14-42d7-11ef-9ade-d5fc5159efa5" 00:13:35.275 ], 00:13:35.275 "product_name": "Malloc disk", 00:13:35.275 "block_size": 512, 00:13:35.275 "num_blocks": 65536, 00:13:35.275 "uuid": "dbe2da14-42d7-11ef-9ade-d5fc5159efa5", 00:13:35.275 "assigned_rate_limits": { 00:13:35.275 "rw_ios_per_sec": 0, 00:13:35.275 "rw_mbytes_per_sec": 0, 00:13:35.275 "r_mbytes_per_sec": 0, 00:13:35.275 "w_mbytes_per_sec": 0 00:13:35.275 }, 00:13:35.275 "claimed": true, 00:13:35.275 "claim_type": "exclusive_write", 00:13:35.275 "zoned": false, 00:13:35.275 "supported_io_types": { 00:13:35.275 "read": true, 00:13:35.275 "write": true, 00:13:35.275 "unmap": true, 00:13:35.275 "flush": true, 00:13:35.275 "reset": true, 00:13:35.275 "nvme_admin": false, 00:13:35.275 "nvme_io": false, 00:13:35.275 "nvme_io_md": false, 00:13:35.275 "write_zeroes": true, 00:13:35.275 "zcopy": true, 00:13:35.275 "get_zone_info": false, 00:13:35.275 "zone_management": false, 00:13:35.275 "zone_append": false, 00:13:35.275 "compare": false, 00:13:35.275 "compare_and_write": false, 00:13:35.275 "abort": true, 00:13:35.275 "seek_hole": false, 00:13:35.275 "seek_data": false, 00:13:35.275 "copy": true, 00:13:35.275 "nvme_iov_md": false 00:13:35.275 }, 00:13:35.275 "memory_domains": [ 00:13:35.275 { 00:13:35.275 "dma_device_id": "system", 00:13:35.275 "dma_device_type": 1 00:13:35.275 }, 00:13:35.275 { 00:13:35.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.275 "dma_device_type": 2 00:13:35.275 } 00:13:35.275 ], 00:13:35.275 "driver_specific": {} 00:13:35.275 }' 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.275 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:35.532 [2024-07-15 18:27:27.820289] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.532 [2024-07-15 18:27:27.820317] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.532 [2024-07-15 18:27:27.820341] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.532 [2024-07-15 18:27:27.820356] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.532 [2024-07-15 18:27:27.820360] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2bb510c34f00 name Existed_Raid, state offline 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58407 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58407 ']' 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58407 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58407 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:35.532 killing process with pid 58407 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58407' 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58407 00:13:35.532 [2024-07-15 18:27:27.849108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.532 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58407 00:13:35.532 [2024-07-15 18:27:27.871900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.790 18:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:35.790 00:13:35.791 real 0m29.166s 00:13:35.791 user 0m53.468s 00:13:35.791 sys 0m3.960s 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.791 ************************************ 00:13:35.791 END TEST raid_state_function_test 00:13:35.791 ************************************ 00:13:35.791 18:27:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:35.791 18:27:28 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:35.791 18:27:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:35.791 18:27:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.791 18:27:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.791 ************************************ 00:13:35.791 START TEST raid_state_function_test_sb 00:13:35.791 ************************************ 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 
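The run that just ended closes with the per-bdev property sweep (bdev_raid.sh@203-208), then bdev_raid_delete and killprocess. The sweep reduces to a small loop: fetch each base bdev's JSON over the RPC socket and compare the fields that matter for raid membership. Below is a minimal sketch of that loop reconstructed from the commands echoed in the trace above; the rpc() wrapper is illustrative shorthand, while the rpc.py path, socket, bdev names, and field checks are taken verbatim from the log.

    #!/usr/bin/env bash
    # Sketch of the verify loop seen at bdev_raid.sh@203-208 above.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        base_bdev_info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # Same assertions as the [[ ... == ... ]] tests in the trace: a plain
        # malloc base bdev has 512-byte blocks and no metadata or DIF.
        [[ $(jq .block_size    <<<"$base_bdev_info") == 512  ]]
        [[ $(jq .md_size       <<<"$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<<"$base_bdev_info") == null ]]
        [[ $(jq .dif_type      <<<"$base_bdev_info") == null ]]
    done

The raid_state_function_test_sb run that starts next is the same raid_state_function_test function invoked with superblock=true, which only changes the create arguments: -s is passed alongside -z 64, as the local-variable setup below shows.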
00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59234 00:13:35.791 Process raid pid: 59234 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59234' 00:13:35.791 18:27:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59234 /var/tmp/spdk-raid.sock 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59234 ']' 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.791 18:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.791 [2024-07-15 18:27:28.153599] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:13:35.791 [2024-07-15 18:27:28.153839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:36.395 EAL: TSC is not safe to use in SMP mode 00:13:36.395 EAL: TSC is not invariant 00:13:36.395 [2024-07-15 18:27:28.760548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.653 [2024-07-15 18:27:28.867801] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
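Here the harness starts a fresh bdev_svc app for the test (-r names the RPC socket, -i 0 the shm id, -L bdev_raid enables that debug log flag) and blocks in waitforlisten until the socket answers; the EAL and /proc/stat NOTICEs that follow are expected on FreeBSD, which has no Linux-style procfs. waitforlisten's internals are not shown in this log, so the polling loop below is an illustrative stand-in under that assumption, not the real helper from autotest_common.sh; the binary path, socket, and flags are the ones echoed above.

    # Launch as echoed at bdev_raid.sh@243-246 above.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Hypothetical wait: poll until the target answers a basic RPC.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1   # bail out if the app died
        sleep 0.1
    done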
00:13:36.653 [2024-07-15 18:27:28.870004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.653 [2024-07-15 18:27:28.870800] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.653 [2024-07-15 18:27:28.870815] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.910 18:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.910 18:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:36.910 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:37.167 [2024-07-15 18:27:29.414408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.167 [2024-07-15 18:27:29.414471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.167 [2024-07-15 18:27:29.414477] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.167 [2024-07-15 18:27:29.414486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.167 [2024-07-15 18:27:29.414490] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:37.167 [2024-07-15 18:27:29.414498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:37.167 [2024-07-15 18:27:29.414501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:37.167 [2024-07-15 18:27:29.414508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.167 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.168 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.168 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.168 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.425 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.425 "name": "Existed_Raid", 00:13:37.425 "uuid": 
"e4b37a02-42d7-11ef-9ade-d5fc5159efa5", 00:13:37.425 "strip_size_kb": 64, 00:13:37.425 "state": "configuring", 00:13:37.425 "raid_level": "raid0", 00:13:37.425 "superblock": true, 00:13:37.425 "num_base_bdevs": 4, 00:13:37.425 "num_base_bdevs_discovered": 0, 00:13:37.425 "num_base_bdevs_operational": 4, 00:13:37.425 "base_bdevs_list": [ 00:13:37.425 { 00:13:37.425 "name": "BaseBdev1", 00:13:37.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.425 "is_configured": false, 00:13:37.425 "data_offset": 0, 00:13:37.425 "data_size": 0 00:13:37.425 }, 00:13:37.425 { 00:13:37.425 "name": "BaseBdev2", 00:13:37.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.425 "is_configured": false, 00:13:37.425 "data_offset": 0, 00:13:37.425 "data_size": 0 00:13:37.425 }, 00:13:37.425 { 00:13:37.425 "name": "BaseBdev3", 00:13:37.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.425 "is_configured": false, 00:13:37.425 "data_offset": 0, 00:13:37.425 "data_size": 0 00:13:37.425 }, 00:13:37.425 { 00:13:37.425 "name": "BaseBdev4", 00:13:37.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.425 "is_configured": false, 00:13:37.425 "data_offset": 0, 00:13:37.425 "data_size": 0 00:13:37.425 } 00:13:37.425 ] 00:13:37.425 }' 00:13:37.425 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.425 18:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.684 18:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:37.942 [2024-07-15 18:27:30.302454] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.942 [2024-07-15 18:27:30.302490] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb3d63c34500 name Existed_Raid, state configuring 00:13:37.942 18:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:38.508 [2024-07-15 18:27:30.590483] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.508 [2024-07-15 18:27:30.590558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.508 [2024-07-15 18:27:30.590564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.508 [2024-07-15 18:27:30.590573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.508 [2024-07-15 18:27:30.590577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:38.508 [2024-07-15 18:27:30.590584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.508 [2024-07-15 18:27:30.590588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:38.508 [2024-07-15 18:27:30.590595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.508 [2024-07-15 18:27:30.835604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:13:38.508 BaseBdev1 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:38.508 18:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.767 18:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.027 [ 00:13:39.027 { 00:13:39.027 "name": "BaseBdev1", 00:13:39.027 "aliases": [ 00:13:39.027 "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5" 00:13:39.027 ], 00:13:39.027 "product_name": "Malloc disk", 00:13:39.027 "block_size": 512, 00:13:39.027 "num_blocks": 65536, 00:13:39.027 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:39.027 "assigned_rate_limits": { 00:13:39.027 "rw_ios_per_sec": 0, 00:13:39.027 "rw_mbytes_per_sec": 0, 00:13:39.027 "r_mbytes_per_sec": 0, 00:13:39.027 "w_mbytes_per_sec": 0 00:13:39.027 }, 00:13:39.027 "claimed": true, 00:13:39.027 "claim_type": "exclusive_write", 00:13:39.027 "zoned": false, 00:13:39.027 "supported_io_types": { 00:13:39.027 "read": true, 00:13:39.027 "write": true, 00:13:39.027 "unmap": true, 00:13:39.027 "flush": true, 00:13:39.027 "reset": true, 00:13:39.027 "nvme_admin": false, 00:13:39.027 "nvme_io": false, 00:13:39.027 "nvme_io_md": false, 00:13:39.027 "write_zeroes": true, 00:13:39.027 "zcopy": true, 00:13:39.027 "get_zone_info": false, 00:13:39.027 "zone_management": false, 00:13:39.027 "zone_append": false, 00:13:39.027 "compare": false, 00:13:39.027 "compare_and_write": false, 00:13:39.027 "abort": true, 00:13:39.027 "seek_hole": false, 00:13:39.027 "seek_data": false, 00:13:39.027 "copy": true, 00:13:39.027 "nvme_iov_md": false 00:13:39.027 }, 00:13:39.027 "memory_domains": [ 00:13:39.027 { 00:13:39.027 "dma_device_id": "system", 00:13:39.027 "dma_device_type": 1 00:13:39.027 }, 00:13:39.027 { 00:13:39.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.027 "dma_device_type": 2 00:13:39.027 } 00:13:39.027 ], 00:13:39.027 "driver_specific": {} 00:13:39.027 } 00:13:39.027 ] 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.027 18:27:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.027 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.286 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.286 "name": "Existed_Raid", 00:13:39.286 "uuid": "e566ee76-42d7-11ef-9ade-d5fc5159efa5", 00:13:39.286 "strip_size_kb": 64, 00:13:39.286 "state": "configuring", 00:13:39.286 "raid_level": "raid0", 00:13:39.286 "superblock": true, 00:13:39.286 "num_base_bdevs": 4, 00:13:39.286 "num_base_bdevs_discovered": 1, 00:13:39.286 "num_base_bdevs_operational": 4, 00:13:39.286 "base_bdevs_list": [ 00:13:39.286 { 00:13:39.286 "name": "BaseBdev1", 00:13:39.286 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:39.286 "is_configured": true, 00:13:39.286 "data_offset": 2048, 00:13:39.286 "data_size": 63488 00:13:39.286 }, 00:13:39.286 { 00:13:39.286 "name": "BaseBdev2", 00:13:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.286 "is_configured": false, 00:13:39.286 "data_offset": 0, 00:13:39.286 "data_size": 0 00:13:39.286 }, 00:13:39.286 { 00:13:39.286 "name": "BaseBdev3", 00:13:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.286 "is_configured": false, 00:13:39.286 "data_offset": 0, 00:13:39.286 "data_size": 0 00:13:39.286 }, 00:13:39.286 { 00:13:39.286 "name": "BaseBdev4", 00:13:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.286 "is_configured": false, 00:13:39.286 "data_offset": 0, 00:13:39.286 "data_size": 0 00:13:39.286 } 00:13:39.286 ] 00:13:39.286 }' 00:13:39.286 18:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.286 18:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.854 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:40.113 [2024-07-15 18:27:32.330612] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.113 [2024-07-15 18:27:32.330648] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb3d63c34500 name Existed_Raid, state configuring 00:13:40.113 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:40.372 [2024-07-15 18:27:32.634674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.372 [2024-07-15 18:27:32.635556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.372 [2024-07-15 18:27:32.635597] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.372 [2024-07-15 18:27:32.635602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.372 [2024-07-15 18:27:32.635611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.372 [2024-07-15 18:27:32.635614] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:40.372 [2024-07-15 18:27:32.635622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.372 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.631 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.631 "name": "Existed_Raid", 00:13:40.631 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:40.631 "strip_size_kb": 64, 00:13:40.631 "state": "configuring", 00:13:40.631 "raid_level": "raid0", 00:13:40.631 "superblock": true, 00:13:40.631 "num_base_bdevs": 4, 00:13:40.631 "num_base_bdevs_discovered": 1, 00:13:40.631 "num_base_bdevs_operational": 4, 00:13:40.631 "base_bdevs_list": [ 00:13:40.631 { 00:13:40.631 "name": "BaseBdev1", 00:13:40.631 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:40.631 "is_configured": true, 00:13:40.631 "data_offset": 2048, 00:13:40.631 "data_size": 63488 00:13:40.631 }, 00:13:40.631 { 00:13:40.631 "name": "BaseBdev2", 00:13:40.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.631 "is_configured": false, 00:13:40.631 "data_offset": 0, 00:13:40.631 "data_size": 0 00:13:40.631 }, 00:13:40.631 { 00:13:40.631 "name": "BaseBdev3", 00:13:40.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.631 "is_configured": false, 00:13:40.631 "data_offset": 0, 00:13:40.631 "data_size": 0 00:13:40.631 }, 00:13:40.631 { 00:13:40.631 "name": "BaseBdev4", 
00:13:40.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.631 "is_configured": false, 00:13:40.631 "data_offset": 0, 00:13:40.631 "data_size": 0 00:13:40.631 } 00:13:40.631 ] 00:13:40.631 }' 00:13:40.631 18:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.631 18:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.197 18:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:41.455 [2024-07-15 18:27:33.654912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.455 BaseBdev2 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:41.455 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.713 18:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.972 [ 00:13:41.972 { 00:13:41.972 "name": "BaseBdev2", 00:13:41.972 "aliases": [ 00:13:41.972 "e73a80cb-42d7-11ef-9ade-d5fc5159efa5" 00:13:41.972 ], 00:13:41.972 "product_name": "Malloc disk", 00:13:41.972 "block_size": 512, 00:13:41.972 "num_blocks": 65536, 00:13:41.972 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:41.972 "assigned_rate_limits": { 00:13:41.972 "rw_ios_per_sec": 0, 00:13:41.972 "rw_mbytes_per_sec": 0, 00:13:41.972 "r_mbytes_per_sec": 0, 00:13:41.972 "w_mbytes_per_sec": 0 00:13:41.972 }, 00:13:41.972 "claimed": true, 00:13:41.972 "claim_type": "exclusive_write", 00:13:41.972 "zoned": false, 00:13:41.972 "supported_io_types": { 00:13:41.972 "read": true, 00:13:41.972 "write": true, 00:13:41.972 "unmap": true, 00:13:41.972 "flush": true, 00:13:41.972 "reset": true, 00:13:41.972 "nvme_admin": false, 00:13:41.972 "nvme_io": false, 00:13:41.972 "nvme_io_md": false, 00:13:41.972 "write_zeroes": true, 00:13:41.972 "zcopy": true, 00:13:41.972 "get_zone_info": false, 00:13:41.972 "zone_management": false, 00:13:41.972 "zone_append": false, 00:13:41.972 "compare": false, 00:13:41.972 "compare_and_write": false, 00:13:41.972 "abort": true, 00:13:41.972 "seek_hole": false, 00:13:41.972 "seek_data": false, 00:13:41.972 "copy": true, 00:13:41.972 "nvme_iov_md": false 00:13:41.972 }, 00:13:41.972 "memory_domains": [ 00:13:41.972 { 00:13:41.972 "dma_device_id": "system", 00:13:41.972 "dma_device_type": 1 00:13:41.972 }, 00:13:41.972 { 00:13:41.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.972 "dma_device_type": 2 00:13:41.972 } 00:13:41.972 ], 00:13:41.972 "driver_specific": {} 00:13:41.972 } 00:13:41.972 ] 00:13:41.972 18:27:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.972 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.231 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.231 "name": "Existed_Raid", 00:13:42.231 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:42.231 "strip_size_kb": 64, 00:13:42.231 "state": "configuring", 00:13:42.231 "raid_level": "raid0", 00:13:42.231 "superblock": true, 00:13:42.231 "num_base_bdevs": 4, 00:13:42.231 "num_base_bdevs_discovered": 2, 00:13:42.231 "num_base_bdevs_operational": 4, 00:13:42.231 "base_bdevs_list": [ 00:13:42.231 { 00:13:42.231 "name": "BaseBdev1", 00:13:42.231 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:42.231 "is_configured": true, 00:13:42.231 "data_offset": 2048, 00:13:42.231 "data_size": 63488 00:13:42.231 }, 00:13:42.231 { 00:13:42.231 "name": "BaseBdev2", 00:13:42.231 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:42.231 "is_configured": true, 00:13:42.231 "data_offset": 2048, 00:13:42.231 "data_size": 63488 00:13:42.231 }, 00:13:42.231 { 00:13:42.231 "name": "BaseBdev3", 00:13:42.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.231 "is_configured": false, 00:13:42.231 "data_offset": 0, 00:13:42.231 "data_size": 0 00:13:42.231 }, 00:13:42.231 { 00:13:42.231 "name": "BaseBdev4", 00:13:42.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.231 "is_configured": false, 00:13:42.231 "data_offset": 0, 00:13:42.231 "data_size": 0 00:13:42.231 } 00:13:42.231 ] 00:13:42.231 }' 00:13:42.231 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.231 18:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.488 18:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.747 [2024-07-15 18:27:35.071011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.747 BaseBdev3 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.747 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.005 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.284 [ 00:13:43.284 { 00:13:43.284 "name": "BaseBdev3", 00:13:43.284 "aliases": [ 00:13:43.284 "e8129635-42d7-11ef-9ade-d5fc5159efa5" 00:13:43.284 ], 00:13:43.284 "product_name": "Malloc disk", 00:13:43.284 "block_size": 512, 00:13:43.284 "num_blocks": 65536, 00:13:43.284 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:43.284 "assigned_rate_limits": { 00:13:43.284 "rw_ios_per_sec": 0, 00:13:43.284 "rw_mbytes_per_sec": 0, 00:13:43.284 "r_mbytes_per_sec": 0, 00:13:43.284 "w_mbytes_per_sec": 0 00:13:43.284 }, 00:13:43.284 "claimed": true, 00:13:43.284 "claim_type": "exclusive_write", 00:13:43.284 "zoned": false, 00:13:43.284 "supported_io_types": { 00:13:43.284 "read": true, 00:13:43.284 "write": true, 00:13:43.284 "unmap": true, 00:13:43.284 "flush": true, 00:13:43.284 "reset": true, 00:13:43.284 "nvme_admin": false, 00:13:43.284 "nvme_io": false, 00:13:43.284 "nvme_io_md": false, 00:13:43.284 "write_zeroes": true, 00:13:43.284 "zcopy": true, 00:13:43.284 "get_zone_info": false, 00:13:43.284 "zone_management": false, 00:13:43.284 "zone_append": false, 00:13:43.284 "compare": false, 00:13:43.284 "compare_and_write": false, 00:13:43.284 "abort": true, 00:13:43.284 "seek_hole": false, 00:13:43.284 "seek_data": false, 00:13:43.284 "copy": true, 00:13:43.284 "nvme_iov_md": false 00:13:43.284 }, 00:13:43.284 "memory_domains": [ 00:13:43.284 { 00:13:43.284 "dma_device_id": "system", 00:13:43.284 "dma_device_type": 1 00:13:43.284 }, 00:13:43.284 { 00:13:43.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.284 "dma_device_type": 2 00:13:43.284 } 00:13:43.284 ], 00:13:43.284 "driver_specific": {} 00:13:43.284 } 00:13:43.284 ] 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.284 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.543 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.543 "name": "Existed_Raid", 00:13:43.543 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:43.543 "strip_size_kb": 64, 00:13:43.543 "state": "configuring", 00:13:43.543 "raid_level": "raid0", 00:13:43.543 "superblock": true, 00:13:43.543 "num_base_bdevs": 4, 00:13:43.543 "num_base_bdevs_discovered": 3, 00:13:43.543 "num_base_bdevs_operational": 4, 00:13:43.543 "base_bdevs_list": [ 00:13:43.543 { 00:13:43.543 "name": "BaseBdev1", 00:13:43.543 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:43.543 "is_configured": true, 00:13:43.543 "data_offset": 2048, 00:13:43.543 "data_size": 63488 00:13:43.543 }, 00:13:43.543 { 00:13:43.543 "name": "BaseBdev2", 00:13:43.543 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:43.543 "is_configured": true, 00:13:43.543 "data_offset": 2048, 00:13:43.543 "data_size": 63488 00:13:43.543 }, 00:13:43.543 { 00:13:43.543 "name": "BaseBdev3", 00:13:43.543 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:43.543 "is_configured": true, 00:13:43.543 "data_offset": 2048, 00:13:43.543 "data_size": 63488 00:13:43.543 }, 00:13:43.543 { 00:13:43.543 "name": "BaseBdev4", 00:13:43.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.543 "is_configured": false, 00:13:43.543 "data_offset": 0, 00:13:43.543 "data_size": 0 00:13:43.543 } 00:13:43.543 ] 00:13:43.543 }' 00:13:43.543 18:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.543 18:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 18:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:44.370 [2024-07-15 18:27:36.467196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.370 [2024-07-15 18:27:36.467285] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb3d63c34a00 00:13:44.370 [2024-07-15 18:27:36.467293] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:44.370 [2024-07-15 
18:27:36.467316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3d63c97e20 00:13:44.370 [2024-07-15 18:27:36.467401] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb3d63c34a00 00:13:44.370 [2024-07-15 18:27:36.467406] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb3d63c34a00 00:13:44.370 [2024-07-15 18:27:36.467431] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.370 BaseBdev4 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.370 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.629 18:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:44.887 [ 00:13:44.887 { 00:13:44.887 "name": "BaseBdev4", 00:13:44.887 "aliases": [ 00:13:44.887 "e8e79e81-42d7-11ef-9ade-d5fc5159efa5" 00:13:44.887 ], 00:13:44.887 "product_name": "Malloc disk", 00:13:44.887 "block_size": 512, 00:13:44.887 "num_blocks": 65536, 00:13:44.887 "uuid": "e8e79e81-42d7-11ef-9ade-d5fc5159efa5", 00:13:44.887 "assigned_rate_limits": { 00:13:44.887 "rw_ios_per_sec": 0, 00:13:44.887 "rw_mbytes_per_sec": 0, 00:13:44.887 "r_mbytes_per_sec": 0, 00:13:44.887 "w_mbytes_per_sec": 0 00:13:44.887 }, 00:13:44.887 "claimed": true, 00:13:44.887 "claim_type": "exclusive_write", 00:13:44.887 "zoned": false, 00:13:44.887 "supported_io_types": { 00:13:44.887 "read": true, 00:13:44.887 "write": true, 00:13:44.887 "unmap": true, 00:13:44.887 "flush": true, 00:13:44.887 "reset": true, 00:13:44.887 "nvme_admin": false, 00:13:44.887 "nvme_io": false, 00:13:44.887 "nvme_io_md": false, 00:13:44.887 "write_zeroes": true, 00:13:44.887 "zcopy": true, 00:13:44.887 "get_zone_info": false, 00:13:44.887 "zone_management": false, 00:13:44.887 "zone_append": false, 00:13:44.887 "compare": false, 00:13:44.887 "compare_and_write": false, 00:13:44.887 "abort": true, 00:13:44.887 "seek_hole": false, 00:13:44.887 "seek_data": false, 00:13:44.887 "copy": true, 00:13:44.887 "nvme_iov_md": false 00:13:44.887 }, 00:13:44.887 "memory_domains": [ 00:13:44.887 { 00:13:44.887 "dma_device_id": "system", 00:13:44.887 "dma_device_type": 1 00:13:44.887 }, 00:13:44.887 { 00:13:44.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.887 "dma_device_type": 2 00:13:44.887 } 00:13:44.887 ], 00:13:44.887 "driver_specific": {} 00:13:44.887 } 00:13:44.887 ] 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.887 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.888 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.888 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.888 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.888 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.146 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.146 "name": "Existed_Raid", 00:13:45.146 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.146 "strip_size_kb": 64, 00:13:45.146 "state": "online", 00:13:45.146 "raid_level": "raid0", 00:13:45.146 "superblock": true, 00:13:45.146 "num_base_bdevs": 4, 00:13:45.146 "num_base_bdevs_discovered": 4, 00:13:45.146 "num_base_bdevs_operational": 4, 00:13:45.146 "base_bdevs_list": [ 00:13:45.146 { 00:13:45.146 "name": "BaseBdev1", 00:13:45.146 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.146 "is_configured": true, 00:13:45.146 "data_offset": 2048, 00:13:45.146 "data_size": 63488 00:13:45.146 }, 00:13:45.146 { 00:13:45.146 "name": "BaseBdev2", 00:13:45.146 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.146 "is_configured": true, 00:13:45.146 "data_offset": 2048, 00:13:45.146 "data_size": 63488 00:13:45.146 }, 00:13:45.146 { 00:13:45.146 "name": "BaseBdev3", 00:13:45.146 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.146 "is_configured": true, 00:13:45.146 "data_offset": 2048, 00:13:45.146 "data_size": 63488 00:13:45.146 }, 00:13:45.146 { 00:13:45.146 "name": "BaseBdev4", 00:13:45.146 "uuid": "e8e79e81-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.146 "is_configured": true, 00:13:45.146 "data_offset": 2048, 00:13:45.146 "data_size": 63488 00:13:45.146 } 00:13:45.146 ] 00:13:45.146 }' 00:13:45.146 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.146 18:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:45.405 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:45.674 [2024-07-15 18:27:37.895139] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:45.674 "name": "Existed_Raid", 00:13:45.674 "aliases": [ 00:13:45.674 "e69ed98a-42d7-11ef-9ade-d5fc5159efa5" 00:13:45.674 ], 00:13:45.674 "product_name": "Raid Volume", 00:13:45.674 "block_size": 512, 00:13:45.674 "num_blocks": 253952, 00:13:45.674 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "assigned_rate_limits": { 00:13:45.674 "rw_ios_per_sec": 0, 00:13:45.674 "rw_mbytes_per_sec": 0, 00:13:45.674 "r_mbytes_per_sec": 0, 00:13:45.674 "w_mbytes_per_sec": 0 00:13:45.674 }, 00:13:45.674 "claimed": false, 00:13:45.674 "zoned": false, 00:13:45.674 "supported_io_types": { 00:13:45.674 "read": true, 00:13:45.674 "write": true, 00:13:45.674 "unmap": true, 00:13:45.674 "flush": true, 00:13:45.674 "reset": true, 00:13:45.674 "nvme_admin": false, 00:13:45.674 "nvme_io": false, 00:13:45.674 "nvme_io_md": false, 00:13:45.674 "write_zeroes": true, 00:13:45.674 "zcopy": false, 00:13:45.674 "get_zone_info": false, 00:13:45.674 "zone_management": false, 00:13:45.674 "zone_append": false, 00:13:45.674 "compare": false, 00:13:45.674 "compare_and_write": false, 00:13:45.674 "abort": false, 00:13:45.674 "seek_hole": false, 00:13:45.674 "seek_data": false, 00:13:45.674 "copy": false, 00:13:45.674 "nvme_iov_md": false 00:13:45.674 }, 00:13:45.674 "memory_domains": [ 00:13:45.674 { 00:13:45.674 "dma_device_id": "system", 00:13:45.674 "dma_device_type": 1 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.674 "dma_device_type": 2 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "system", 00:13:45.674 "dma_device_type": 1 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.674 "dma_device_type": 2 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "system", 00:13:45.674 "dma_device_type": 1 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.674 "dma_device_type": 2 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "system", 00:13:45.674 "dma_device_type": 1 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.674 "dma_device_type": 2 00:13:45.674 } 00:13:45.674 ], 00:13:45.674 "driver_specific": { 00:13:45.674 "raid": { 00:13:45.674 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "strip_size_kb": 64, 00:13:45.674 "state": "online", 00:13:45.674 "raid_level": "raid0", 00:13:45.674 "superblock": true, 00:13:45.674 "num_base_bdevs": 4, 00:13:45.674 "num_base_bdevs_discovered": 4, 00:13:45.674 "num_base_bdevs_operational": 4, 00:13:45.674 "base_bdevs_list": [ 00:13:45.674 { 00:13:45.674 "name": "BaseBdev1", 00:13:45.674 "uuid": 
"e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "is_configured": true, 00:13:45.674 "data_offset": 2048, 00:13:45.674 "data_size": 63488 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "name": "BaseBdev2", 00:13:45.674 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "is_configured": true, 00:13:45.674 "data_offset": 2048, 00:13:45.674 "data_size": 63488 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "name": "BaseBdev3", 00:13:45.674 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "is_configured": true, 00:13:45.674 "data_offset": 2048, 00:13:45.674 "data_size": 63488 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "name": "BaseBdev4", 00:13:45.674 "uuid": "e8e79e81-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.674 "is_configured": true, 00:13:45.674 "data_offset": 2048, 00:13:45.674 "data_size": 63488 00:13:45.674 } 00:13:45.674 ] 00:13:45.674 } 00:13:45.674 } 00:13:45.674 }' 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:45.674 BaseBdev2 00:13:45.674 BaseBdev3 00:13:45.674 BaseBdev4' 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:45.674 18:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:45.936 "name": "BaseBdev1", 00:13:45.936 "aliases": [ 00:13:45.936 "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5" 00:13:45.936 ], 00:13:45.936 "product_name": "Malloc disk", 00:13:45.936 "block_size": 512, 00:13:45.936 "num_blocks": 65536, 00:13:45.936 "uuid": "e58c2ae7-42d7-11ef-9ade-d5fc5159efa5", 00:13:45.936 "assigned_rate_limits": { 00:13:45.936 "rw_ios_per_sec": 0, 00:13:45.936 "rw_mbytes_per_sec": 0, 00:13:45.936 "r_mbytes_per_sec": 0, 00:13:45.936 "w_mbytes_per_sec": 0 00:13:45.936 }, 00:13:45.936 "claimed": true, 00:13:45.936 "claim_type": "exclusive_write", 00:13:45.936 "zoned": false, 00:13:45.936 "supported_io_types": { 00:13:45.936 "read": true, 00:13:45.936 "write": true, 00:13:45.936 "unmap": true, 00:13:45.936 "flush": true, 00:13:45.936 "reset": true, 00:13:45.936 "nvme_admin": false, 00:13:45.936 "nvme_io": false, 00:13:45.936 "nvme_io_md": false, 00:13:45.936 "write_zeroes": true, 00:13:45.936 "zcopy": true, 00:13:45.936 "get_zone_info": false, 00:13:45.936 "zone_management": false, 00:13:45.936 "zone_append": false, 00:13:45.936 "compare": false, 00:13:45.936 "compare_and_write": false, 00:13:45.936 "abort": true, 00:13:45.936 "seek_hole": false, 00:13:45.936 "seek_data": false, 00:13:45.936 "copy": true, 00:13:45.936 "nvme_iov_md": false 00:13:45.936 }, 00:13:45.936 "memory_domains": [ 00:13:45.936 { 00:13:45.936 "dma_device_id": "system", 00:13:45.936 "dma_device_type": 1 00:13:45.936 }, 00:13:45.936 { 00:13:45.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.936 "dma_device_type": 2 00:13:45.936 } 00:13:45.936 ], 00:13:45.936 "driver_specific": {} 00:13:45.936 }' 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.936 18:27:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:45.936 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.195 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.195 "name": "BaseBdev2", 00:13:46.195 "aliases": [ 00:13:46.195 "e73a80cb-42d7-11ef-9ade-d5fc5159efa5" 00:13:46.195 ], 00:13:46.195 "product_name": "Malloc disk", 00:13:46.195 "block_size": 512, 00:13:46.195 "num_blocks": 65536, 00:13:46.195 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:46.195 "assigned_rate_limits": { 00:13:46.195 "rw_ios_per_sec": 0, 00:13:46.195 "rw_mbytes_per_sec": 0, 00:13:46.195 "r_mbytes_per_sec": 0, 00:13:46.195 "w_mbytes_per_sec": 0 00:13:46.195 }, 00:13:46.195 "claimed": true, 00:13:46.195 "claim_type": "exclusive_write", 00:13:46.195 "zoned": false, 00:13:46.195 "supported_io_types": { 00:13:46.195 "read": true, 00:13:46.195 "write": true, 00:13:46.195 "unmap": true, 00:13:46.195 "flush": true, 00:13:46.195 "reset": true, 00:13:46.195 "nvme_admin": false, 00:13:46.195 "nvme_io": false, 00:13:46.195 "nvme_io_md": false, 00:13:46.195 "write_zeroes": true, 00:13:46.195 "zcopy": true, 00:13:46.195 "get_zone_info": false, 00:13:46.195 "zone_management": false, 00:13:46.195 "zone_append": false, 00:13:46.195 "compare": false, 00:13:46.195 "compare_and_write": false, 00:13:46.195 "abort": true, 00:13:46.195 "seek_hole": false, 00:13:46.195 "seek_data": false, 00:13:46.195 "copy": true, 00:13:46.195 "nvme_iov_md": false 00:13:46.195 }, 00:13:46.195 "memory_domains": [ 00:13:46.195 { 00:13:46.195 "dma_device_id": "system", 00:13:46.195 "dma_device_type": 1 00:13:46.195 }, 00:13:46.195 { 00:13:46.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.195 "dma_device_type": 2 00:13:46.195 } 00:13:46.195 ], 00:13:46.195 "driver_specific": {} 00:13:46.195 }' 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:46.455 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.715 "name": "BaseBdev3", 00:13:46.715 "aliases": [ 00:13:46.715 "e8129635-42d7-11ef-9ade-d5fc5159efa5" 00:13:46.715 ], 00:13:46.715 "product_name": "Malloc disk", 00:13:46.715 "block_size": 512, 00:13:46.715 "num_blocks": 65536, 00:13:46.715 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:46.715 "assigned_rate_limits": { 00:13:46.715 "rw_ios_per_sec": 0, 00:13:46.715 "rw_mbytes_per_sec": 0, 00:13:46.715 "r_mbytes_per_sec": 0, 00:13:46.715 "w_mbytes_per_sec": 0 00:13:46.715 }, 00:13:46.715 "claimed": true, 00:13:46.715 "claim_type": "exclusive_write", 00:13:46.715 "zoned": false, 00:13:46.715 "supported_io_types": { 00:13:46.715 "read": true, 00:13:46.715 "write": true, 00:13:46.715 "unmap": true, 00:13:46.715 "flush": true, 00:13:46.715 "reset": true, 00:13:46.715 "nvme_admin": false, 00:13:46.715 "nvme_io": false, 00:13:46.715 "nvme_io_md": false, 00:13:46.715 "write_zeroes": true, 00:13:46.715 "zcopy": true, 00:13:46.715 "get_zone_info": false, 00:13:46.715 "zone_management": false, 00:13:46.715 "zone_append": false, 00:13:46.715 "compare": false, 00:13:46.715 "compare_and_write": false, 00:13:46.715 "abort": true, 00:13:46.715 "seek_hole": false, 00:13:46.715 "seek_data": false, 00:13:46.715 "copy": true, 00:13:46.715 "nvme_iov_md": false 00:13:46.715 }, 00:13:46.715 "memory_domains": [ 00:13:46.715 { 00:13:46.715 "dma_device_id": "system", 00:13:46.715 "dma_device_type": 1 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.715 "dma_device_type": 2 00:13:46.715 } 00:13:46.715 ], 00:13:46.715 "driver_specific": {} 00:13:46.715 }' 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
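The loop traced above (bdev_raid.sh@203-208) checks each configured base bdev of Existed_Raid for the properties a malloc base bdev is expected to have: a 512-byte block size and no metadata size, metadata interleaving, or DIF type. A minimal sketch of the same check, assuming an SPDK target is serving RPCs on /var/tmp/spdk-raid.sock and using only the rpc.py calls and jq filters visible in this trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Collect the names of all configured base bdevs from the raid bdev's JSON dump.
    base_bdev_names=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]' \
        | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]      # data block size
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]        # no separate metadata region
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]  # no interleaved metadata
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]       # no DIF protection
    done

Each [[ ... ]] mirrors a comparison in the trace; when errexit is in effect, as it is in these test scripts, a failed comparison aborts the run, which is how these asserts fail the test.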
00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:46.715 18:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.974 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.975 "name": "BaseBdev4", 00:13:46.975 "aliases": [ 00:13:46.975 "e8e79e81-42d7-11ef-9ade-d5fc5159efa5" 00:13:46.975 ], 00:13:46.975 "product_name": "Malloc disk", 00:13:46.975 "block_size": 512, 00:13:46.975 "num_blocks": 65536, 00:13:46.975 "uuid": "e8e79e81-42d7-11ef-9ade-d5fc5159efa5", 00:13:46.975 "assigned_rate_limits": { 00:13:46.975 "rw_ios_per_sec": 0, 00:13:46.975 "rw_mbytes_per_sec": 0, 00:13:46.975 "r_mbytes_per_sec": 0, 00:13:46.975 "w_mbytes_per_sec": 0 00:13:46.975 }, 00:13:46.975 "claimed": true, 00:13:46.975 "claim_type": "exclusive_write", 00:13:46.975 "zoned": false, 00:13:46.975 "supported_io_types": { 00:13:46.975 "read": true, 00:13:46.975 "write": true, 00:13:46.975 "unmap": true, 00:13:46.975 "flush": true, 00:13:46.975 "reset": true, 00:13:46.975 "nvme_admin": false, 00:13:46.975 "nvme_io": false, 00:13:46.975 "nvme_io_md": false, 00:13:46.975 "write_zeroes": true, 00:13:46.975 "zcopy": true, 00:13:46.975 "get_zone_info": false, 00:13:46.975 "zone_management": false, 00:13:46.975 "zone_append": false, 00:13:46.975 "compare": false, 00:13:46.975 "compare_and_write": false, 00:13:46.975 "abort": true, 00:13:46.975 "seek_hole": false, 00:13:46.975 "seek_data": false, 00:13:46.975 "copy": true, 00:13:46.975 "nvme_iov_md": false 00:13:46.975 }, 00:13:46.975 "memory_domains": [ 00:13:46.975 { 00:13:46.975 "dma_device_id": "system", 00:13:46.975 "dma_device_type": 1 00:13:46.975 }, 00:13:46.975 { 00:13:46.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.975 "dma_device_type": 2 00:13:46.975 } 00:13:46.975 ], 00:13:46.975 "driver_specific": {} 00:13:46.975 }' 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.975 18:27:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.975 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:47.234 [2024-07-15 18:27:39.587241] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.234 [2024-07-15 18:27:39.587268] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.234 [2024-07-15 18:27:39.587284] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.234 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.802 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.802 "name": "Existed_Raid", 00:13:47.802 "uuid": "e69ed98a-42d7-11ef-9ade-d5fc5159efa5", 00:13:47.802 "strip_size_kb": 64, 
00:13:47.802 "state": "offline", 00:13:47.802 "raid_level": "raid0", 00:13:47.802 "superblock": true, 00:13:47.802 "num_base_bdevs": 4, 00:13:47.802 "num_base_bdevs_discovered": 3, 00:13:47.802 "num_base_bdevs_operational": 3, 00:13:47.802 "base_bdevs_list": [ 00:13:47.802 { 00:13:47.802 "name": null, 00:13:47.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.802 "is_configured": false, 00:13:47.802 "data_offset": 2048, 00:13:47.802 "data_size": 63488 00:13:47.802 }, 00:13:47.802 { 00:13:47.802 "name": "BaseBdev2", 00:13:47.802 "uuid": "e73a80cb-42d7-11ef-9ade-d5fc5159efa5", 00:13:47.802 "is_configured": true, 00:13:47.802 "data_offset": 2048, 00:13:47.802 "data_size": 63488 00:13:47.802 }, 00:13:47.802 { 00:13:47.802 "name": "BaseBdev3", 00:13:47.802 "uuid": "e8129635-42d7-11ef-9ade-d5fc5159efa5", 00:13:47.802 "is_configured": true, 00:13:47.802 "data_offset": 2048, 00:13:47.802 "data_size": 63488 00:13:47.802 }, 00:13:47.802 { 00:13:47.802 "name": "BaseBdev4", 00:13:47.802 "uuid": "e8e79e81-42d7-11ef-9ade-d5fc5159efa5", 00:13:47.802 "is_configured": true, 00:13:47.802 "data_offset": 2048, 00:13:47.802 "data_size": 63488 00:13:47.802 } 00:13:47.802 ] 00:13:47.802 }' 00:13:47.802 18:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.802 18:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.061 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:48.061 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:48.061 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.061 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:48.320 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:48.320 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:48.320 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:48.578 [2024-07-15 18:27:40.741215] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:48.578 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:48.578 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:48.578 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.578 18:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:48.854 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:48.854 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:48.854 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:49.112 [2024-07-15 18:27:41.309672] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.112 18:27:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:49.112 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.112 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.112 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:49.371 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:49.371 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.371 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:49.630 [2024-07-15 18:27:41.802571] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:49.630 [2024-07-15 18:27:41.802608] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb3d63c34a00 name Existed_Raid, state offline 00:13:49.630 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:49.630 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.630 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:49.630 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:49.888 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.147 BaseBdev2 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.147 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.405 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.405 [ 
00:13:50.405 { 00:13:50.405 "name": "BaseBdev2", 00:13:50.405 "aliases": [ 00:13:50.405 "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5" 00:13:50.405 ], 00:13:50.405 "product_name": "Malloc disk", 00:13:50.405 "block_size": 512, 00:13:50.405 "num_blocks": 65536, 00:13:50.405 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:50.405 "assigned_rate_limits": { 00:13:50.405 "rw_ios_per_sec": 0, 00:13:50.405 "rw_mbytes_per_sec": 0, 00:13:50.405 "r_mbytes_per_sec": 0, 00:13:50.405 "w_mbytes_per_sec": 0 00:13:50.405 }, 00:13:50.405 "claimed": false, 00:13:50.405 "zoned": false, 00:13:50.405 "supported_io_types": { 00:13:50.405 "read": true, 00:13:50.405 "write": true, 00:13:50.405 "unmap": true, 00:13:50.405 "flush": true, 00:13:50.405 "reset": true, 00:13:50.405 "nvme_admin": false, 00:13:50.405 "nvme_io": false, 00:13:50.405 "nvme_io_md": false, 00:13:50.405 "write_zeroes": true, 00:13:50.405 "zcopy": true, 00:13:50.405 "get_zone_info": false, 00:13:50.405 "zone_management": false, 00:13:50.405 "zone_append": false, 00:13:50.405 "compare": false, 00:13:50.405 "compare_and_write": false, 00:13:50.405 "abort": true, 00:13:50.405 "seek_hole": false, 00:13:50.405 "seek_data": false, 00:13:50.405 "copy": true, 00:13:50.405 "nvme_iov_md": false 00:13:50.405 }, 00:13:50.405 "memory_domains": [ 00:13:50.405 { 00:13:50.405 "dma_device_id": "system", 00:13:50.405 "dma_device_type": 1 00:13:50.405 }, 00:13:50.405 { 00:13:50.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.405 "dma_device_type": 2 00:13:50.405 } 00:13:50.405 ], 00:13:50.405 "driver_specific": {} 00:13:50.405 } 00:13:50.405 ] 00:13:50.405 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:50.405 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:50.405 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:50.405 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.663 BaseBdev3 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.663 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.921 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:51.179 [ 00:13:51.179 { 00:13:51.179 "name": "BaseBdev3", 00:13:51.179 "aliases": [ 00:13:51.179 "ecce64c6-42d7-11ef-9ade-d5fc5159efa5" 00:13:51.179 ], 00:13:51.179 "product_name": "Malloc disk", 00:13:51.179 "block_size": 512, 00:13:51.179 "num_blocks": 65536, 00:13:51.179 "uuid": 
"ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:51.179 "assigned_rate_limits": { 00:13:51.179 "rw_ios_per_sec": 0, 00:13:51.179 "rw_mbytes_per_sec": 0, 00:13:51.179 "r_mbytes_per_sec": 0, 00:13:51.179 "w_mbytes_per_sec": 0 00:13:51.179 }, 00:13:51.179 "claimed": false, 00:13:51.179 "zoned": false, 00:13:51.179 "supported_io_types": { 00:13:51.179 "read": true, 00:13:51.179 "write": true, 00:13:51.179 "unmap": true, 00:13:51.179 "flush": true, 00:13:51.179 "reset": true, 00:13:51.179 "nvme_admin": false, 00:13:51.179 "nvme_io": false, 00:13:51.179 "nvme_io_md": false, 00:13:51.179 "write_zeroes": true, 00:13:51.179 "zcopy": true, 00:13:51.179 "get_zone_info": false, 00:13:51.179 "zone_management": false, 00:13:51.179 "zone_append": false, 00:13:51.179 "compare": false, 00:13:51.179 "compare_and_write": false, 00:13:51.179 "abort": true, 00:13:51.179 "seek_hole": false, 00:13:51.179 "seek_data": false, 00:13:51.179 "copy": true, 00:13:51.179 "nvme_iov_md": false 00:13:51.179 }, 00:13:51.179 "memory_domains": [ 00:13:51.179 { 00:13:51.179 "dma_device_id": "system", 00:13:51.179 "dma_device_type": 1 00:13:51.179 }, 00:13:51.179 { 00:13:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.179 "dma_device_type": 2 00:13:51.179 } 00:13:51.179 ], 00:13:51.179 "driver_specific": {} 00:13:51.179 } 00:13:51.179 ] 00:13:51.179 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:51.179 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:51.179 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:51.179 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:51.451 BaseBdev4 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.451 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.734 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:51.992 [ 00:13:51.992 { 00:13:51.992 "name": "BaseBdev4", 00:13:51.992 "aliases": [ 00:13:51.992 "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5" 00:13:51.992 ], 00:13:51.992 "product_name": "Malloc disk", 00:13:51.992 "block_size": 512, 00:13:51.992 "num_blocks": 65536, 00:13:51.992 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:51.992 "assigned_rate_limits": { 00:13:51.992 "rw_ios_per_sec": 0, 00:13:51.992 "rw_mbytes_per_sec": 0, 00:13:51.992 "r_mbytes_per_sec": 0, 00:13:51.992 "w_mbytes_per_sec": 0 00:13:51.992 }, 00:13:51.992 "claimed": false, 00:13:51.992 "zoned": false, 00:13:51.992 
"supported_io_types": { 00:13:51.992 "read": true, 00:13:51.992 "write": true, 00:13:51.992 "unmap": true, 00:13:51.992 "flush": true, 00:13:51.992 "reset": true, 00:13:51.992 "nvme_admin": false, 00:13:51.992 "nvme_io": false, 00:13:51.992 "nvme_io_md": false, 00:13:51.992 "write_zeroes": true, 00:13:51.992 "zcopy": true, 00:13:51.992 "get_zone_info": false, 00:13:51.992 "zone_management": false, 00:13:51.992 "zone_append": false, 00:13:51.992 "compare": false, 00:13:51.992 "compare_and_write": false, 00:13:51.992 "abort": true, 00:13:51.992 "seek_hole": false, 00:13:51.992 "seek_data": false, 00:13:51.992 "copy": true, 00:13:51.992 "nvme_iov_md": false 00:13:51.992 }, 00:13:51.992 "memory_domains": [ 00:13:51.992 { 00:13:51.992 "dma_device_id": "system", 00:13:51.992 "dma_device_type": 1 00:13:51.992 }, 00:13:51.992 { 00:13:51.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.992 "dma_device_type": 2 00:13:51.992 } 00:13:51.992 ], 00:13:51.992 "driver_specific": {} 00:13:51.992 } 00:13:51.992 ] 00:13:51.992 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:51.992 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:51.992 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:51.992 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:52.251 [2024-07-15 18:27:44.536686] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.251 [2024-07-15 18:27:44.536742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.251 [2024-07-15 18:27:44.536753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.251 [2024-07-15 18:27:44.537461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.251 [2024-07-15 18:27:44.537480] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.251 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.252 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.252 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.252 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.509 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.509 "name": "Existed_Raid", 00:13:52.509 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:52.509 "strip_size_kb": 64, 00:13:52.509 "state": "configuring", 00:13:52.509 "raid_level": "raid0", 00:13:52.509 "superblock": true, 00:13:52.509 "num_base_bdevs": 4, 00:13:52.509 "num_base_bdevs_discovered": 3, 00:13:52.509 "num_base_bdevs_operational": 4, 00:13:52.509 "base_bdevs_list": [ 00:13:52.509 { 00:13:52.509 "name": "BaseBdev1", 00:13:52.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.509 "is_configured": false, 00:13:52.509 "data_offset": 0, 00:13:52.509 "data_size": 0 00:13:52.509 }, 00:13:52.509 { 00:13:52.509 "name": "BaseBdev2", 00:13:52.509 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:52.509 "is_configured": true, 00:13:52.509 "data_offset": 2048, 00:13:52.509 "data_size": 63488 00:13:52.509 }, 00:13:52.509 { 00:13:52.509 "name": "BaseBdev3", 00:13:52.509 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:52.509 "is_configured": true, 00:13:52.509 "data_offset": 2048, 00:13:52.509 "data_size": 63488 00:13:52.509 }, 00:13:52.509 { 00:13:52.509 "name": "BaseBdev4", 00:13:52.509 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:52.509 "is_configured": true, 00:13:52.509 "data_offset": 2048, 00:13:52.509 "data_size": 63488 00:13:52.509 } 00:13:52.509 ] 00:13:52.509 }' 00:13:52.509 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.509 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.767 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:53.026 [2024-07-15 18:27:45.372742] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.026 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.026 18:27:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.283 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.283 "name": "Existed_Raid", 00:13:53.283 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:53.283 "strip_size_kb": 64, 00:13:53.283 "state": "configuring", 00:13:53.283 "raid_level": "raid0", 00:13:53.283 "superblock": true, 00:13:53.283 "num_base_bdevs": 4, 00:13:53.283 "num_base_bdevs_discovered": 2, 00:13:53.283 "num_base_bdevs_operational": 4, 00:13:53.283 "base_bdevs_list": [ 00:13:53.283 { 00:13:53.283 "name": "BaseBdev1", 00:13:53.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.283 "is_configured": false, 00:13:53.283 "data_offset": 0, 00:13:53.283 "data_size": 0 00:13:53.283 }, 00:13:53.283 { 00:13:53.283 "name": null, 00:13:53.283 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:53.283 "is_configured": false, 00:13:53.283 "data_offset": 2048, 00:13:53.283 "data_size": 63488 00:13:53.283 }, 00:13:53.283 { 00:13:53.283 "name": "BaseBdev3", 00:13:53.283 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:53.283 "is_configured": true, 00:13:53.283 "data_offset": 2048, 00:13:53.283 "data_size": 63488 00:13:53.283 }, 00:13:53.283 { 00:13:53.283 "name": "BaseBdev4", 00:13:53.283 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:53.283 "is_configured": true, 00:13:53.283 "data_offset": 2048, 00:13:53.283 "data_size": 63488 00:13:53.283 } 00:13:53.283 ] 00:13:53.283 }' 00:13:53.283 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.283 18:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.849 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.849 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.108 [2024-07-15 18:27:46.460971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.108 BaseBdev1 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:54.108 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.673 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.673 [ 00:13:54.673 { 00:13:54.673 "name": "BaseBdev1", 00:13:54.673 "aliases": [ 00:13:54.673 "eedc8e85-42d7-11ef-9ade-d5fc5159efa5" 00:13:54.673 ], 00:13:54.673 "product_name": "Malloc disk", 00:13:54.673 "block_size": 512, 00:13:54.673 "num_blocks": 65536, 00:13:54.673 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:54.673 "assigned_rate_limits": { 00:13:54.673 "rw_ios_per_sec": 0, 00:13:54.673 "rw_mbytes_per_sec": 0, 00:13:54.673 "r_mbytes_per_sec": 0, 00:13:54.673 "w_mbytes_per_sec": 0 00:13:54.673 }, 00:13:54.673 "claimed": true, 00:13:54.673 "claim_type": "exclusive_write", 00:13:54.673 "zoned": false, 00:13:54.673 "supported_io_types": { 00:13:54.673 "read": true, 00:13:54.673 "write": true, 00:13:54.673 "unmap": true, 00:13:54.673 "flush": true, 00:13:54.673 "reset": true, 00:13:54.673 "nvme_admin": false, 00:13:54.673 "nvme_io": false, 00:13:54.673 "nvme_io_md": false, 00:13:54.673 "write_zeroes": true, 00:13:54.673 "zcopy": true, 00:13:54.673 "get_zone_info": false, 00:13:54.673 "zone_management": false, 00:13:54.673 "zone_append": false, 00:13:54.673 "compare": false, 00:13:54.673 "compare_and_write": false, 00:13:54.673 "abort": true, 00:13:54.673 "seek_hole": false, 00:13:54.673 "seek_data": false, 00:13:54.673 "copy": true, 00:13:54.673 "nvme_iov_md": false 00:13:54.673 }, 00:13:54.673 "memory_domains": [ 00:13:54.673 { 00:13:54.673 "dma_device_id": "system", 00:13:54.673 "dma_device_type": 1 00:13:54.673 }, 00:13:54.673 { 00:13:54.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.673 "dma_device_type": 2 00:13:54.673 } 00:13:54.673 ], 00:13:54.673 "driver_specific": {} 00:13:54.673 } 00:13:54.673 ] 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.673 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.249 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.249 "name": "Existed_Raid", 00:13:55.249 "uuid": 
"edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:55.249 "strip_size_kb": 64, 00:13:55.249 "state": "configuring", 00:13:55.249 "raid_level": "raid0", 00:13:55.249 "superblock": true, 00:13:55.249 "num_base_bdevs": 4, 00:13:55.249 "num_base_bdevs_discovered": 3, 00:13:55.249 "num_base_bdevs_operational": 4, 00:13:55.249 "base_bdevs_list": [ 00:13:55.249 { 00:13:55.249 "name": "BaseBdev1", 00:13:55.249 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:55.249 "is_configured": true, 00:13:55.249 "data_offset": 2048, 00:13:55.249 "data_size": 63488 00:13:55.249 }, 00:13:55.249 { 00:13:55.249 "name": null, 00:13:55.249 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:55.249 "is_configured": false, 00:13:55.249 "data_offset": 2048, 00:13:55.249 "data_size": 63488 00:13:55.249 }, 00:13:55.249 { 00:13:55.249 "name": "BaseBdev3", 00:13:55.249 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:55.249 "is_configured": true, 00:13:55.249 "data_offset": 2048, 00:13:55.249 "data_size": 63488 00:13:55.249 }, 00:13:55.249 { 00:13:55.249 "name": "BaseBdev4", 00:13:55.249 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:55.249 "is_configured": true, 00:13:55.249 "data_offset": 2048, 00:13:55.249 "data_size": 63488 00:13:55.249 } 00:13:55.249 ] 00:13:55.249 }' 00:13:55.249 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.249 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.530 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.788 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:55.788 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:56.045 [2024-07-15 18:27:48.236964] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.045 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.302 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.302 "name": "Existed_Raid", 00:13:56.302 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:56.302 "strip_size_kb": 64, 00:13:56.302 "state": "configuring", 00:13:56.302 "raid_level": "raid0", 00:13:56.302 "superblock": true, 00:13:56.302 "num_base_bdevs": 4, 00:13:56.302 "num_base_bdevs_discovered": 2, 00:13:56.302 "num_base_bdevs_operational": 4, 00:13:56.302 "base_bdevs_list": [ 00:13:56.302 { 00:13:56.302 "name": "BaseBdev1", 00:13:56.302 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:56.302 "is_configured": true, 00:13:56.302 "data_offset": 2048, 00:13:56.302 "data_size": 63488 00:13:56.302 }, 00:13:56.303 { 00:13:56.303 "name": null, 00:13:56.303 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:56.303 "is_configured": false, 00:13:56.303 "data_offset": 2048, 00:13:56.303 "data_size": 63488 00:13:56.303 }, 00:13:56.303 { 00:13:56.303 "name": null, 00:13:56.303 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:56.303 "is_configured": false, 00:13:56.303 "data_offset": 2048, 00:13:56.303 "data_size": 63488 00:13:56.303 }, 00:13:56.303 { 00:13:56.303 "name": "BaseBdev4", 00:13:56.303 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:56.303 "is_configured": true, 00:13:56.303 "data_offset": 2048, 00:13:56.303 "data_size": 63488 00:13:56.303 } 00:13:56.303 ] 00:13:56.303 }' 00:13:56.303 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.303 18:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.562 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.562 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.821 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:56.821 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.078 [2024-07-15 18:27:49.321052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:57.078 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.079 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:57.079 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.079 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.079 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.079 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.336 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.336 "name": "Existed_Raid", 00:13:57.336 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:57.336 "strip_size_kb": 64, 00:13:57.336 "state": "configuring", 00:13:57.336 "raid_level": "raid0", 00:13:57.336 "superblock": true, 00:13:57.336 "num_base_bdevs": 4, 00:13:57.336 "num_base_bdevs_discovered": 3, 00:13:57.336 "num_base_bdevs_operational": 4, 00:13:57.336 "base_bdevs_list": [ 00:13:57.336 { 00:13:57.336 "name": "BaseBdev1", 00:13:57.336 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:57.336 "is_configured": true, 00:13:57.336 "data_offset": 2048, 00:13:57.336 "data_size": 63488 00:13:57.336 }, 00:13:57.336 { 00:13:57.336 "name": null, 00:13:57.336 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:57.336 "is_configured": false, 00:13:57.336 "data_offset": 2048, 00:13:57.336 "data_size": 63488 00:13:57.336 }, 00:13:57.336 { 00:13:57.336 "name": "BaseBdev3", 00:13:57.336 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:57.336 "is_configured": true, 00:13:57.336 "data_offset": 2048, 00:13:57.336 "data_size": 63488 00:13:57.336 }, 00:13:57.336 { 00:13:57.336 "name": "BaseBdev4", 00:13:57.336 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:57.336 "is_configured": true, 00:13:57.336 "data_offset": 2048, 00:13:57.336 "data_size": 63488 00:13:57.336 } 00:13:57.336 ] 00:13:57.336 }' 00:13:57.336 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.336 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.902 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.903 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.161 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:58.161 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:58.418 [2024-07-15 18:27:50.573154] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
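After each remove or re-add, the test re-reads the raid bdev and asserts the expected state transition: with a superblock present, deleting a base bdev leaves Existed_Raid registered in the "configuring" state with one fewer discovered base bdev instead of destroying it. A minimal sketch of that verification step, assuming the same socket and the bdev_raid_get_bdevs/jq pattern shown in this trace (the exact fields compared here are an assumption based on the JSON dumped above and on the locals verify_raid_bdev_state declares):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1        # drop one base bdev out from under the raid
    raid_bdev_info=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$raid_bdev_info") == configuring ]]          # superblock keeps it alive
    [[ $(jq .num_base_bdevs_discovered <<< "$raid_bdev_info") == 2 ]]   # BaseBdev1 no longer found
    [[ $(jq .num_base_bdevs_operational <<< "$raid_bdev_info") == 4 ]]  # still needs 4 to go online

The raid_bdev_info dump that follows shows exactly this outcome: state "configuring", two discovered base bdevs, four operational slots, with the BaseBdev1 and BaseBdev2 entries unconfigured.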
00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.418 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.676 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.676 "name": "Existed_Raid", 00:13:58.676 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:58.676 "strip_size_kb": 64, 00:13:58.676 "state": "configuring", 00:13:58.676 "raid_level": "raid0", 00:13:58.676 "superblock": true, 00:13:58.676 "num_base_bdevs": 4, 00:13:58.676 "num_base_bdevs_discovered": 2, 00:13:58.676 "num_base_bdevs_operational": 4, 00:13:58.676 "base_bdevs_list": [ 00:13:58.676 { 00:13:58.676 "name": null, 00:13:58.676 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:58.676 "is_configured": false, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": null, 00:13:58.676 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:58.676 "is_configured": false, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": "BaseBdev3", 00:13:58.676 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": "BaseBdev4", 00:13:58.676 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 } 00:13:58.676 ] 00:13:58.676 }' 00:13:58.676 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.676 18:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.934 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.934 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.192 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:59.192 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:59.450 [2024-07-15 18:27:51.674982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.450 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.709 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.709 "name": "Existed_Raid", 00:13:59.709 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:13:59.709 "strip_size_kb": 64, 00:13:59.709 "state": "configuring", 00:13:59.709 "raid_level": "raid0", 00:13:59.709 "superblock": true, 00:13:59.709 "num_base_bdevs": 4, 00:13:59.709 "num_base_bdevs_discovered": 3, 00:13:59.709 "num_base_bdevs_operational": 4, 00:13:59.709 "base_bdevs_list": [ 00:13:59.709 { 00:13:59.709 "name": null, 00:13:59.709 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:13:59.709 "is_configured": false, 00:13:59.709 "data_offset": 2048, 00:13:59.709 "data_size": 63488 00:13:59.709 }, 00:13:59.709 { 00:13:59.709 "name": "BaseBdev2", 00:13:59.709 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:13:59.709 "is_configured": true, 00:13:59.709 "data_offset": 2048, 00:13:59.709 "data_size": 63488 00:13:59.709 }, 00:13:59.709 { 00:13:59.709 "name": "BaseBdev3", 00:13:59.709 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:13:59.709 "is_configured": true, 00:13:59.709 "data_offset": 2048, 00:13:59.709 "data_size": 63488 00:13:59.709 }, 00:13:59.709 { 00:13:59.709 "name": "BaseBdev4", 00:13:59.709 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:13:59.709 "is_configured": true, 00:13:59.709 "data_offset": 2048, 00:13:59.709 "data_size": 63488 00:13:59.709 } 00:13:59.709 ] 00:13:59.709 }' 00:13:59.709 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.709 18:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.023 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.024 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.282 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:00.282 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:00.282 18:27:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.541 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u eedc8e85-42d7-11ef-9ade-d5fc5159efa5 00:14:00.800 [2024-07-15 18:27:53.103227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:00.800 [2024-07-15 18:27:53.103293] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb3d63c34f00 00:14:00.800 [2024-07-15 18:27:53.103299] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:00.800 [2024-07-15 18:27:53.103322] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3d63c97e20 00:14:00.800 [2024-07-15 18:27:53.103382] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb3d63c34f00 00:14:00.800 [2024-07-15 18:27:53.103388] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb3d63c34f00 00:14:00.800 [2024-07-15 18:27:53.103421] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.800 NewBaseBdev 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.800 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.058 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.317 [ 00:14:01.317 { 00:14:01.317 "name": "NewBaseBdev", 00:14:01.317 "aliases": [ 00:14:01.317 "eedc8e85-42d7-11ef-9ade-d5fc5159efa5" 00:14:01.317 ], 00:14:01.317 "product_name": "Malloc disk", 00:14:01.317 "block_size": 512, 00:14:01.317 "num_blocks": 65536, 00:14:01.317 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.317 "assigned_rate_limits": { 00:14:01.317 "rw_ios_per_sec": 0, 00:14:01.317 "rw_mbytes_per_sec": 0, 00:14:01.317 "r_mbytes_per_sec": 0, 00:14:01.317 "w_mbytes_per_sec": 0 00:14:01.317 }, 00:14:01.317 "claimed": true, 00:14:01.317 "claim_type": "exclusive_write", 00:14:01.317 "zoned": false, 00:14:01.317 "supported_io_types": { 00:14:01.317 "read": true, 00:14:01.317 "write": true, 00:14:01.317 "unmap": true, 00:14:01.317 "flush": true, 00:14:01.317 "reset": true, 00:14:01.317 "nvme_admin": false, 00:14:01.317 "nvme_io": false, 00:14:01.317 "nvme_io_md": false, 00:14:01.317 "write_zeroes": true, 00:14:01.317 "zcopy": true, 00:14:01.317 "get_zone_info": false, 00:14:01.317 "zone_management": false, 00:14:01.317 "zone_append": false, 00:14:01.317 "compare": false, 00:14:01.317 "compare_and_write": false, 00:14:01.317 "abort": 
true, 00:14:01.317 "seek_hole": false, 00:14:01.317 "seek_data": false, 00:14:01.317 "copy": true, 00:14:01.317 "nvme_iov_md": false 00:14:01.317 }, 00:14:01.317 "memory_domains": [ 00:14:01.317 { 00:14:01.317 "dma_device_id": "system", 00:14:01.317 "dma_device_type": 1 00:14:01.317 }, 00:14:01.317 { 00:14:01.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.317 "dma_device_type": 2 00:14:01.317 } 00:14:01.317 ], 00:14:01.317 "driver_specific": {} 00:14:01.317 } 00:14:01.317 ] 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.317 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.575 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.575 "name": "Existed_Raid", 00:14:01.575 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.575 "strip_size_kb": 64, 00:14:01.575 "state": "online", 00:14:01.575 "raid_level": "raid0", 00:14:01.575 "superblock": true, 00:14:01.575 "num_base_bdevs": 4, 00:14:01.575 "num_base_bdevs_discovered": 4, 00:14:01.575 "num_base_bdevs_operational": 4, 00:14:01.575 "base_bdevs_list": [ 00:14:01.575 { 00:14:01.575 "name": "NewBaseBdev", 00:14:01.575 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.575 "is_configured": true, 00:14:01.575 "data_offset": 2048, 00:14:01.575 "data_size": 63488 00:14:01.575 }, 00:14:01.575 { 00:14:01.575 "name": "BaseBdev2", 00:14:01.575 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.575 "is_configured": true, 00:14:01.575 "data_offset": 2048, 00:14:01.575 "data_size": 63488 00:14:01.575 }, 00:14:01.575 { 00:14:01.575 "name": "BaseBdev3", 00:14:01.575 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.576 "is_configured": true, 00:14:01.576 "data_offset": 2048, 00:14:01.576 "data_size": 63488 00:14:01.576 }, 00:14:01.576 { 00:14:01.576 "name": "BaseBdev4", 00:14:01.576 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:14:01.576 "is_configured": true, 00:14:01.576 "data_offset": 2048, 00:14:01.576 "data_size": 63488 00:14:01.576 } 00:14:01.576 ] 00:14:01.576 }' 00:14:01.576 
18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.576 18:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:01.834 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.092 [2024-07-15 18:27:54.435238] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.092 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.092 "name": "Existed_Raid", 00:14:02.092 "aliases": [ 00:14:02.092 "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5" 00:14:02.092 ], 00:14:02.092 "product_name": "Raid Volume", 00:14:02.092 "block_size": 512, 00:14:02.092 "num_blocks": 253952, 00:14:02.092 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.092 "assigned_rate_limits": { 00:14:02.092 "rw_ios_per_sec": 0, 00:14:02.092 "rw_mbytes_per_sec": 0, 00:14:02.092 "r_mbytes_per_sec": 0, 00:14:02.092 "w_mbytes_per_sec": 0 00:14:02.092 }, 00:14:02.092 "claimed": false, 00:14:02.092 "zoned": false, 00:14:02.092 "supported_io_types": { 00:14:02.092 "read": true, 00:14:02.092 "write": true, 00:14:02.092 "unmap": true, 00:14:02.092 "flush": true, 00:14:02.092 "reset": true, 00:14:02.092 "nvme_admin": false, 00:14:02.092 "nvme_io": false, 00:14:02.092 "nvme_io_md": false, 00:14:02.092 "write_zeroes": true, 00:14:02.092 "zcopy": false, 00:14:02.092 "get_zone_info": false, 00:14:02.092 "zone_management": false, 00:14:02.092 "zone_append": false, 00:14:02.092 "compare": false, 00:14:02.092 "compare_and_write": false, 00:14:02.092 "abort": false, 00:14:02.092 "seek_hole": false, 00:14:02.092 "seek_data": false, 00:14:02.092 "copy": false, 00:14:02.092 "nvme_iov_md": false 00:14:02.092 }, 00:14:02.092 "memory_domains": [ 00:14:02.092 { 00:14:02.092 "dma_device_id": "system", 00:14:02.092 "dma_device_type": 1 00:14:02.092 }, 00:14:02.092 { 00:14:02.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.092 "dma_device_type": 2 00:14:02.092 }, 00:14:02.092 { 00:14:02.092 "dma_device_id": "system", 00:14:02.092 "dma_device_type": 1 00:14:02.092 }, 00:14:02.092 { 00:14:02.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.093 "dma_device_type": 2 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "dma_device_id": "system", 00:14:02.093 "dma_device_type": 1 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.093 "dma_device_type": 2 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "dma_device_id": "system", 00:14:02.093 "dma_device_type": 1 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:02.093 "dma_device_type": 2 00:14:02.093 } 00:14:02.093 ], 00:14:02.093 "driver_specific": { 00:14:02.093 "raid": { 00:14:02.093 "uuid": "edb6f3f3-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.093 "strip_size_kb": 64, 00:14:02.093 "state": "online", 00:14:02.093 "raid_level": "raid0", 00:14:02.093 "superblock": true, 00:14:02.093 "num_base_bdevs": 4, 00:14:02.093 "num_base_bdevs_discovered": 4, 00:14:02.093 "num_base_bdevs_operational": 4, 00:14:02.093 "base_bdevs_list": [ 00:14:02.093 { 00:14:02.093 "name": "NewBaseBdev", 00:14:02.093 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.093 "is_configured": true, 00:14:02.093 "data_offset": 2048, 00:14:02.093 "data_size": 63488 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "name": "BaseBdev2", 00:14:02.093 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.093 "is_configured": true, 00:14:02.093 "data_offset": 2048, 00:14:02.093 "data_size": 63488 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "name": "BaseBdev3", 00:14:02.093 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.093 "is_configured": true, 00:14:02.093 "data_offset": 2048, 00:14:02.093 "data_size": 63488 00:14:02.093 }, 00:14:02.093 { 00:14:02.093 "name": "BaseBdev4", 00:14:02.093 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.093 "is_configured": true, 00:14:02.093 "data_offset": 2048, 00:14:02.093 "data_size": 63488 00:14:02.093 } 00:14:02.093 ] 00:14:02.093 } 00:14:02.093 } 00:14:02.093 }' 00:14:02.093 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.093 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:02.093 BaseBdev2 00:14:02.093 BaseBdev3 00:14:02.093 BaseBdev4' 00:14:02.093 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.093 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.093 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.659 "name": "NewBaseBdev", 00:14:02.659 "aliases": [ 00:14:02.659 "eedc8e85-42d7-11ef-9ade-d5fc5159efa5" 00:14:02.659 ], 00:14:02.659 "product_name": "Malloc disk", 00:14:02.659 "block_size": 512, 00:14:02.659 "num_blocks": 65536, 00:14:02.659 "uuid": "eedc8e85-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.659 "assigned_rate_limits": { 00:14:02.659 "rw_ios_per_sec": 0, 00:14:02.659 "rw_mbytes_per_sec": 0, 00:14:02.659 "r_mbytes_per_sec": 0, 00:14:02.659 "w_mbytes_per_sec": 0 00:14:02.659 }, 00:14:02.659 "claimed": true, 00:14:02.659 "claim_type": "exclusive_write", 00:14:02.659 "zoned": false, 00:14:02.659 "supported_io_types": { 00:14:02.659 "read": true, 00:14:02.659 "write": true, 00:14:02.659 "unmap": true, 00:14:02.659 "flush": true, 00:14:02.659 "reset": true, 00:14:02.659 "nvme_admin": false, 00:14:02.659 "nvme_io": false, 00:14:02.659 "nvme_io_md": false, 00:14:02.659 "write_zeroes": true, 00:14:02.659 "zcopy": true, 00:14:02.659 "get_zone_info": false, 00:14:02.659 "zone_management": false, 00:14:02.659 "zone_append": false, 00:14:02.659 "compare": false, 00:14:02.659 "compare_and_write": false, 00:14:02.659 "abort": true, 00:14:02.659 "seek_hole": false, 00:14:02.659 "seek_data": false, 
00:14:02.659 "copy": true, 00:14:02.659 "nvme_iov_md": false 00:14:02.659 }, 00:14:02.659 "memory_domains": [ 00:14:02.659 { 00:14:02.659 "dma_device_id": "system", 00:14:02.659 "dma_device_type": 1 00:14:02.659 }, 00:14:02.659 { 00:14:02.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.659 "dma_device_type": 2 00:14:02.659 } 00:14:02.659 ], 00:14:02.659 "driver_specific": {} 00:14:02.659 }' 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:02.659 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.917 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.917 "name": "BaseBdev2", 00:14:02.917 "aliases": [ 00:14:02.917 "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5" 00:14:02.917 ], 00:14:02.918 "product_name": "Malloc disk", 00:14:02.918 "block_size": 512, 00:14:02.918 "num_blocks": 65536, 00:14:02.918 "uuid": "ec5e14d9-42d7-11ef-9ade-d5fc5159efa5", 00:14:02.918 "assigned_rate_limits": { 00:14:02.918 "rw_ios_per_sec": 0, 00:14:02.918 "rw_mbytes_per_sec": 0, 00:14:02.918 "r_mbytes_per_sec": 0, 00:14:02.918 "w_mbytes_per_sec": 0 00:14:02.918 }, 00:14:02.918 "claimed": true, 00:14:02.918 "claim_type": "exclusive_write", 00:14:02.918 "zoned": false, 00:14:02.918 "supported_io_types": { 00:14:02.918 "read": true, 00:14:02.918 "write": true, 00:14:02.918 "unmap": true, 00:14:02.918 "flush": true, 00:14:02.918 "reset": true, 00:14:02.918 "nvme_admin": false, 00:14:02.918 "nvme_io": false, 00:14:02.918 "nvme_io_md": false, 00:14:02.918 "write_zeroes": true, 00:14:02.918 "zcopy": true, 00:14:02.918 "get_zone_info": false, 00:14:02.918 "zone_management": false, 00:14:02.918 "zone_append": false, 00:14:02.918 "compare": false, 00:14:02.918 "compare_and_write": false, 00:14:02.918 "abort": true, 00:14:02.918 "seek_hole": false, 00:14:02.918 "seek_data": false, 00:14:02.918 "copy": true, 00:14:02.918 "nvme_iov_md": false 00:14:02.918 }, 00:14:02.918 "memory_domains": [ 00:14:02.918 { 00:14:02.918 
"dma_device_id": "system", 00:14:02.918 "dma_device_type": 1 00:14:02.918 }, 00:14:02.918 { 00:14:02.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.918 "dma_device_type": 2 00:14:02.918 } 00:14:02.918 ], 00:14:02.918 "driver_specific": {} 00:14:02.918 }' 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:02.918 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.176 "name": "BaseBdev3", 00:14:03.176 "aliases": [ 00:14:03.176 "ecce64c6-42d7-11ef-9ade-d5fc5159efa5" 00:14:03.176 ], 00:14:03.176 "product_name": "Malloc disk", 00:14:03.176 "block_size": 512, 00:14:03.176 "num_blocks": 65536, 00:14:03.176 "uuid": "ecce64c6-42d7-11ef-9ade-d5fc5159efa5", 00:14:03.176 "assigned_rate_limits": { 00:14:03.176 "rw_ios_per_sec": 0, 00:14:03.176 "rw_mbytes_per_sec": 0, 00:14:03.176 "r_mbytes_per_sec": 0, 00:14:03.176 "w_mbytes_per_sec": 0 00:14:03.176 }, 00:14:03.176 "claimed": true, 00:14:03.176 "claim_type": "exclusive_write", 00:14:03.176 "zoned": false, 00:14:03.176 "supported_io_types": { 00:14:03.176 "read": true, 00:14:03.176 "write": true, 00:14:03.176 "unmap": true, 00:14:03.176 "flush": true, 00:14:03.176 "reset": true, 00:14:03.176 "nvme_admin": false, 00:14:03.176 "nvme_io": false, 00:14:03.176 "nvme_io_md": false, 00:14:03.176 "write_zeroes": true, 00:14:03.176 "zcopy": true, 00:14:03.176 "get_zone_info": false, 00:14:03.176 "zone_management": false, 00:14:03.176 "zone_append": false, 00:14:03.176 "compare": false, 00:14:03.176 "compare_and_write": false, 00:14:03.176 "abort": true, 00:14:03.176 "seek_hole": false, 00:14:03.176 "seek_data": false, 00:14:03.176 "copy": true, 00:14:03.176 "nvme_iov_md": false 00:14:03.176 }, 00:14:03.176 "memory_domains": [ 00:14:03.176 { 00:14:03.176 "dma_device_id": "system", 00:14:03.176 "dma_device_type": 1 00:14:03.176 }, 00:14:03.176 { 00:14:03.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:03.176 "dma_device_type": 2 00:14:03.176 } 00:14:03.176 ], 00:14:03.176 "driver_specific": {} 00:14:03.176 }' 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:03.176 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.435 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.435 "name": "BaseBdev4", 00:14:03.435 "aliases": [ 00:14:03.435 "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5" 00:14:03.435 ], 00:14:03.435 "product_name": "Malloc disk", 00:14:03.435 "block_size": 512, 00:14:03.435 "num_blocks": 65536, 00:14:03.435 "uuid": "ed3ce01e-42d7-11ef-9ade-d5fc5159efa5", 00:14:03.435 "assigned_rate_limits": { 00:14:03.435 "rw_ios_per_sec": 0, 00:14:03.435 "rw_mbytes_per_sec": 0, 00:14:03.435 "r_mbytes_per_sec": 0, 00:14:03.435 "w_mbytes_per_sec": 0 00:14:03.435 }, 00:14:03.435 "claimed": true, 00:14:03.435 "claim_type": "exclusive_write", 00:14:03.435 "zoned": false, 00:14:03.435 "supported_io_types": { 00:14:03.435 "read": true, 00:14:03.435 "write": true, 00:14:03.435 "unmap": true, 00:14:03.435 "flush": true, 00:14:03.435 "reset": true, 00:14:03.435 "nvme_admin": false, 00:14:03.435 "nvme_io": false, 00:14:03.435 "nvme_io_md": false, 00:14:03.435 "write_zeroes": true, 00:14:03.435 "zcopy": true, 00:14:03.435 "get_zone_info": false, 00:14:03.435 "zone_management": false, 00:14:03.435 "zone_append": false, 00:14:03.435 "compare": false, 00:14:03.435 "compare_and_write": false, 00:14:03.435 "abort": true, 00:14:03.435 "seek_hole": false, 00:14:03.435 "seek_data": false, 00:14:03.435 "copy": true, 00:14:03.435 "nvme_iov_md": false 00:14:03.435 }, 00:14:03.435 "memory_domains": [ 00:14:03.435 { 00:14:03.435 "dma_device_id": "system", 00:14:03.435 "dma_device_type": 1 00:14:03.435 }, 00:14:03.435 { 00:14:03.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.435 "dma_device_type": 2 00:14:03.435 } 00:14:03.435 ], 00:14:03.435 "driver_specific": {} 00:14:03.435 }' 00:14:03.435 18:27:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.435 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.435 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.435 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.693 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:03.951 [2024-07-15 18:27:56.091311] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.951 [2024-07-15 18:27:56.091339] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.951 [2024-07-15 18:27:56.091365] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.951 [2024-07-15 18:27:56.091383] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.951 [2024-07-15 18:27:56.091387] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb3d63c34f00 name Existed_Raid, state offline 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59234 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59234 ']' 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59234 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59234 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:03.951 killing process with pid 59234 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59234' 00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59234 00:14:03.951 [2024-07-15 18:27:56.122763] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
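For reference, the verify_raid_bdev_state and verify_raid_bdev_properties checks traced throughout this test reduce to a small rpc.py-plus-jq pattern. A minimal standalone sketch of that pattern follows (assuming a running SPDK target listening on /var/tmp/spdk-raid.sock and a raid bdev named Existed_Raid, both taken from the trace above; this is an illustration of the pattern, not the harness code itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Dump all raid bdevs and keep only the one under test, exactly as
  # bdev_raid.sh@126 does in the trace above.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "Existed_Raid")')

  # Assert on the same fields the test compares: state ("configuring" while
  # base bdevs are still missing, "online" once all four are discovered),
  # raid level, strip size, and the discovered base bdev count.
  [[ $(jq -r .state         <<<"$info") == online ]] || echo "unexpected state"
  [[ $(jq -r .raid_level    <<<"$info") == raid0  ]] || echo "unexpected raid level"
  [[ $(jq -r .strip_size_kb <<<"$info") -eq 64    ]] || echo "unexpected strip size"
  [[ $(jq -r .num_base_bdevs_discovered <<<"$info") -eq 4 ]] || echo "base bdevs missing"

The per-bdev property checks in the trace (jq .block_size, .md_size, .md_interleave, .dif_type against bdev_get_bdevs output) follow the same dump-then-compare shape.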
00:14:03.951 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59234 00:14:03.951 [2024-07-15 18:27:56.145791] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.210 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:04.210 00:14:04.210 real 0m28.227s 00:14:04.210 user 0m51.702s 00:14:04.210 sys 0m3.885s 00:14:04.210 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.210 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.210 ************************************ 00:14:04.210 END TEST raid_state_function_test_sb 00:14:04.210 ************************************ 00:14:04.210 18:27:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:04.210 18:27:56 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:04.210 18:27:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:04.210 18:27:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.210 18:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.210 ************************************ 00:14:04.210 START TEST raid_superblock_test 00:14:04.210 ************************************ 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=60056 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 60056 /var/tmp/spdk-raid.sock 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 60056 ']' 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.210 18:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.210 [2024-07-15 18:27:56.425117] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:04.210 [2024-07-15 18:27:56.425303] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:04.777 EAL: TSC is not safe to use in SMP mode 00:14:04.777 EAL: TSC is not invariant 00:14:04.777 [2024-07-15 18:27:57.040864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.777 [2024-07-15 18:27:57.160439] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:04.777 [2024-07-15 18:27:57.163040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.777 [2024-07-15 18:27:57.164090] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.777 [2024-07-15 18:27:57.164111] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.344 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:05.603 malloc1 00:14:05.603 18:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.861 [2024-07-15 18:27:58.017854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.861 
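The setup now unfolding follows a fixed recipe: each base device is a malloc bdev wrapped in a passthru bdev with a deterministic UUID, and the four passthrus are then assembled into a raid0 volume with an on-disk superblock (the bdev_raid_create call traced further below). Condensed into plain RPC calls, with the paths, sizes, and UUIDs copied from this trace (a sketch under those assumptions, not the harness itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks (bdev_raid.sh@424).
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # Passthru wrapper pinning the UUID the test later matches on
    # (bdev_raid.sh@425).
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
  done

  # raid0 with a 64 KiB strip across the four passthrus; -s writes the
  # superblock that gives raid_superblock_test its name (bdev_raid.sh@429).
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' \
      -n raid_bdev1 -s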
[2024-07-15 18:27:58.017916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.862 [2024-07-15 18:27:58.017930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada634780 00:14:05.862 [2024-07-15 18:27:58.017939] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.862 [2024-07-15 18:27:58.018887] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.862 [2024-07-15 18:27:58.018917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.862 pt1 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.862 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:06.120 malloc2 00:14:06.120 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.377 [2024-07-15 18:27:58.605899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.377 [2024-07-15 18:27:58.605960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.377 [2024-07-15 18:27:58.605973] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada634c80 00:14:06.377 [2024-07-15 18:27:58.605982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.377 [2024-07-15 18:27:58.606699] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.378 [2024-07-15 18:27:58.606729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.378 pt2 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.378 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:06.644 malloc3 00:14:06.644 18:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.919 [2024-07-15 18:27:59.189944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.919 [2024-07-15 18:27:59.190006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.919 [2024-07-15 18:27:59.190019] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada635180 00:14:06.919 [2024-07-15 18:27:59.190028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.919 [2024-07-15 18:27:59.190717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.919 [2024-07-15 18:27:59.190747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.919 pt3 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.919 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:07.176 malloc4 00:14:07.176 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:07.433 [2024-07-15 18:27:59.769977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:07.433 [2024-07-15 18:27:59.770035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.433 [2024-07-15 18:27:59.770048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada635680 00:14:07.433 [2024-07-15 18:27:59.770056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.433 [2024-07-15 18:27:59.770745] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.433 [2024-07-15 18:27:59.770776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:07.433 pt4 00:14:07.433 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:07.433 18:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:07.433 18:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:07.692 [2024-07-15 18:28:00.074016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:07.692 [2024-07-15 18:28:00.074649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.692 [2024-07-15 18:28:00.074673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:07.692 [2024-07-15 18:28:00.074686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:07.692 [2024-07-15 18:28:00.074739] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x342ada635900 00:14:07.692 [2024-07-15 18:28:00.074746] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:07.692 [2024-07-15 18:28:00.074781] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x342ada697e20 00:14:07.692 [2024-07-15 18:28:00.074865] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x342ada635900 00:14:07.692 [2024-07-15 18:28:00.074872] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x342ada635900 00:14:07.692 [2024-07-15 18:28:00.074914] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.950 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.209 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.209 "name": "raid_bdev1", 00:14:08.209 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:08.209 "strip_size_kb": 64, 00:14:08.209 "state": "online", 00:14:08.209 "raid_level": "raid0", 00:14:08.209 "superblock": true, 00:14:08.209 "num_base_bdevs": 4, 00:14:08.209 "num_base_bdevs_discovered": 4, 00:14:08.209 "num_base_bdevs_operational": 4, 00:14:08.209 "base_bdevs_list": [ 00:14:08.209 { 00:14:08.209 "name": "pt1", 00:14:08.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:08.209 "is_configured": true, 00:14:08.209 "data_offset": 2048, 00:14:08.209 "data_size": 
63488 00:14:08.209 }, 00:14:08.209 { 00:14:08.209 "name": "pt2", 00:14:08.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.209 "is_configured": true, 00:14:08.209 "data_offset": 2048, 00:14:08.209 "data_size": 63488 00:14:08.209 }, 00:14:08.209 { 00:14:08.209 "name": "pt3", 00:14:08.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.209 "is_configured": true, 00:14:08.209 "data_offset": 2048, 00:14:08.209 "data_size": 63488 00:14:08.209 }, 00:14:08.209 { 00:14:08.209 "name": "pt4", 00:14:08.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.209 "is_configured": true, 00:14:08.209 "data_offset": 2048, 00:14:08.209 "data_size": 63488 00:14:08.209 } 00:14:08.209 ] 00:14:08.209 }' 00:14:08.209 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.209 18:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:08.467 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:08.726 [2024-07-15 18:28:00.974119] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.726 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:08.726 "name": "raid_bdev1", 00:14:08.726 "aliases": [ 00:14:08.726 "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5" 00:14:08.726 ], 00:14:08.726 "product_name": "Raid Volume", 00:14:08.726 "block_size": 512, 00:14:08.726 "num_blocks": 253952, 00:14:08.726 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:08.726 "assigned_rate_limits": { 00:14:08.726 "rw_ios_per_sec": 0, 00:14:08.726 "rw_mbytes_per_sec": 0, 00:14:08.726 "r_mbytes_per_sec": 0, 00:14:08.726 "w_mbytes_per_sec": 0 00:14:08.726 }, 00:14:08.726 "claimed": false, 00:14:08.726 "zoned": false, 00:14:08.726 "supported_io_types": { 00:14:08.726 "read": true, 00:14:08.726 "write": true, 00:14:08.726 "unmap": true, 00:14:08.726 "flush": true, 00:14:08.726 "reset": true, 00:14:08.726 "nvme_admin": false, 00:14:08.726 "nvme_io": false, 00:14:08.726 "nvme_io_md": false, 00:14:08.726 "write_zeroes": true, 00:14:08.726 "zcopy": false, 00:14:08.726 "get_zone_info": false, 00:14:08.726 "zone_management": false, 00:14:08.726 "zone_append": false, 00:14:08.726 "compare": false, 00:14:08.726 "compare_and_write": false, 00:14:08.726 "abort": false, 00:14:08.726 "seek_hole": false, 00:14:08.726 "seek_data": false, 00:14:08.726 "copy": false, 00:14:08.726 "nvme_iov_md": false 00:14:08.726 }, 00:14:08.726 "memory_domains": [ 00:14:08.726 { 00:14:08.726 "dma_device_id": "system", 00:14:08.726 "dma_device_type": 1 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.726 "dma_device_type": 2 
00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "system", 00:14:08.726 "dma_device_type": 1 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.726 "dma_device_type": 2 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "system", 00:14:08.726 "dma_device_type": 1 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.726 "dma_device_type": 2 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "system", 00:14:08.726 "dma_device_type": 1 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.726 "dma_device_type": 2 00:14:08.726 } 00:14:08.726 ], 00:14:08.726 "driver_specific": { 00:14:08.726 "raid": { 00:14:08.726 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:08.726 "strip_size_kb": 64, 00:14:08.726 "state": "online", 00:14:08.726 "raid_level": "raid0", 00:14:08.726 "superblock": true, 00:14:08.726 "num_base_bdevs": 4, 00:14:08.726 "num_base_bdevs_discovered": 4, 00:14:08.726 "num_base_bdevs_operational": 4, 00:14:08.726 "base_bdevs_list": [ 00:14:08.726 { 00:14:08.726 "name": "pt1", 00:14:08.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:08.726 "is_configured": true, 00:14:08.726 "data_offset": 2048, 00:14:08.726 "data_size": 63488 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "name": "pt2", 00:14:08.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.726 "is_configured": true, 00:14:08.726 "data_offset": 2048, 00:14:08.726 "data_size": 63488 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "name": "pt3", 00:14:08.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.726 "is_configured": true, 00:14:08.726 "data_offset": 2048, 00:14:08.726 "data_size": 63488 00:14:08.726 }, 00:14:08.726 { 00:14:08.726 "name": "pt4", 00:14:08.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.726 "is_configured": true, 00:14:08.726 "data_offset": 2048, 00:14:08.726 "data_size": 63488 00:14:08.726 } 00:14:08.726 ] 00:14:08.726 } 00:14:08.726 } 00:14:08.726 }' 00:14:08.726 18:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.726 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:08.726 pt2 00:14:08.726 pt3 00:14:08.726 pt4' 00:14:08.726 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.726 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:08.726 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.984 "name": "pt1", 00:14:08.984 "aliases": [ 00:14:08.984 "00000000-0000-0000-0000-000000000001" 00:14:08.984 ], 00:14:08.984 "product_name": "passthru", 00:14:08.984 "block_size": 512, 00:14:08.984 "num_blocks": 65536, 00:14:08.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:08.984 "assigned_rate_limits": { 00:14:08.984 "rw_ios_per_sec": 0, 00:14:08.984 "rw_mbytes_per_sec": 0, 00:14:08.984 "r_mbytes_per_sec": 0, 00:14:08.984 "w_mbytes_per_sec": 0 00:14:08.984 }, 00:14:08.984 "claimed": true, 00:14:08.984 "claim_type": "exclusive_write", 00:14:08.984 "zoned": false, 00:14:08.984 "supported_io_types": { 00:14:08.984 "read": true, 00:14:08.984 "write": 
true, 00:14:08.984 "unmap": true, 00:14:08.984 "flush": true, 00:14:08.984 "reset": true, 00:14:08.984 "nvme_admin": false, 00:14:08.984 "nvme_io": false, 00:14:08.984 "nvme_io_md": false, 00:14:08.984 "write_zeroes": true, 00:14:08.984 "zcopy": true, 00:14:08.984 "get_zone_info": false, 00:14:08.984 "zone_management": false, 00:14:08.984 "zone_append": false, 00:14:08.984 "compare": false, 00:14:08.984 "compare_and_write": false, 00:14:08.984 "abort": true, 00:14:08.984 "seek_hole": false, 00:14:08.984 "seek_data": false, 00:14:08.984 "copy": true, 00:14:08.984 "nvme_iov_md": false 00:14:08.984 }, 00:14:08.984 "memory_domains": [ 00:14:08.984 { 00:14:08.984 "dma_device_id": "system", 00:14:08.984 "dma_device_type": 1 00:14:08.984 }, 00:14:08.984 { 00:14:08.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.984 "dma_device_type": 2 00:14:08.984 } 00:14:08.984 ], 00:14:08.984 "driver_specific": { 00:14:08.984 "passthru": { 00:14:08.984 "name": "pt1", 00:14:08.984 "base_bdev_name": "malloc1" 00:14:08.984 } 00:14:08.984 } 00:14:08.984 }' 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:08.984 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.242 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.242 "name": "pt2", 00:14:09.242 "aliases": [ 00:14:09.242 "00000000-0000-0000-0000-000000000002" 00:14:09.242 ], 00:14:09.242 "product_name": "passthru", 00:14:09.242 "block_size": 512, 00:14:09.242 "num_blocks": 65536, 00:14:09.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.242 "assigned_rate_limits": { 00:14:09.242 "rw_ios_per_sec": 0, 00:14:09.242 "rw_mbytes_per_sec": 0, 00:14:09.242 "r_mbytes_per_sec": 0, 00:14:09.242 "w_mbytes_per_sec": 0 00:14:09.242 }, 00:14:09.242 "claimed": true, 00:14:09.242 "claim_type": "exclusive_write", 00:14:09.242 "zoned": false, 00:14:09.242 "supported_io_types": { 00:14:09.242 "read": true, 00:14:09.242 "write": true, 00:14:09.242 "unmap": true, 00:14:09.242 "flush": true, 00:14:09.242 "reset": true, 00:14:09.242 "nvme_admin": false, 00:14:09.242 "nvme_io": false, 
00:14:09.242 "nvme_io_md": false, 00:14:09.242 "write_zeroes": true, 00:14:09.242 "zcopy": true, 00:14:09.242 "get_zone_info": false, 00:14:09.242 "zone_management": false, 00:14:09.242 "zone_append": false, 00:14:09.242 "compare": false, 00:14:09.242 "compare_and_write": false, 00:14:09.242 "abort": true, 00:14:09.242 "seek_hole": false, 00:14:09.242 "seek_data": false, 00:14:09.242 "copy": true, 00:14:09.242 "nvme_iov_md": false 00:14:09.242 }, 00:14:09.242 "memory_domains": [ 00:14:09.242 { 00:14:09.242 "dma_device_id": "system", 00:14:09.242 "dma_device_type": 1 00:14:09.242 }, 00:14:09.242 { 00:14:09.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.242 "dma_device_type": 2 00:14:09.242 } 00:14:09.242 ], 00:14:09.242 "driver_specific": { 00:14:09.242 "passthru": { 00:14:09.242 "name": "pt2", 00:14:09.242 "base_bdev_name": "malloc2" 00:14:09.242 } 00:14:09.242 } 00:14:09.242 }' 00:14:09.242 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.242 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.242 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.242 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:09.500 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.758 "name": "pt3", 00:14:09.758 "aliases": [ 00:14:09.758 "00000000-0000-0000-0000-000000000003" 00:14:09.758 ], 00:14:09.758 "product_name": "passthru", 00:14:09.758 "block_size": 512, 00:14:09.758 "num_blocks": 65536, 00:14:09.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.758 "assigned_rate_limits": { 00:14:09.758 "rw_ios_per_sec": 0, 00:14:09.758 "rw_mbytes_per_sec": 0, 00:14:09.758 "r_mbytes_per_sec": 0, 00:14:09.758 "w_mbytes_per_sec": 0 00:14:09.758 }, 00:14:09.758 "claimed": true, 00:14:09.758 "claim_type": "exclusive_write", 00:14:09.758 "zoned": false, 00:14:09.758 "supported_io_types": { 00:14:09.758 "read": true, 00:14:09.758 "write": true, 00:14:09.758 "unmap": true, 00:14:09.758 "flush": true, 00:14:09.758 "reset": true, 00:14:09.758 "nvme_admin": false, 00:14:09.758 "nvme_io": false, 00:14:09.758 "nvme_io_md": false, 00:14:09.758 "write_zeroes": true, 00:14:09.758 "zcopy": true, 00:14:09.758 "get_zone_info": false, 00:14:09.758 
"zone_management": false, 00:14:09.758 "zone_append": false, 00:14:09.758 "compare": false, 00:14:09.758 "compare_and_write": false, 00:14:09.758 "abort": true, 00:14:09.758 "seek_hole": false, 00:14:09.758 "seek_data": false, 00:14:09.758 "copy": true, 00:14:09.758 "nvme_iov_md": false 00:14:09.758 }, 00:14:09.758 "memory_domains": [ 00:14:09.758 { 00:14:09.758 "dma_device_id": "system", 00:14:09.758 "dma_device_type": 1 00:14:09.758 }, 00:14:09.758 { 00:14:09.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.758 "dma_device_type": 2 00:14:09.758 } 00:14:09.758 ], 00:14:09.758 "driver_specific": { 00:14:09.758 "passthru": { 00:14:09.758 "name": "pt3", 00:14:09.758 "base_bdev_name": "malloc3" 00:14:09.758 } 00:14:09.758 } 00:14:09.758 }' 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.758 18:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:09.758 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.017 "name": "pt4", 00:14:10.017 "aliases": [ 00:14:10.017 "00000000-0000-0000-0000-000000000004" 00:14:10.017 ], 00:14:10.017 "product_name": "passthru", 00:14:10.017 "block_size": 512, 00:14:10.017 "num_blocks": 65536, 00:14:10.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.017 "assigned_rate_limits": { 00:14:10.017 "rw_ios_per_sec": 0, 00:14:10.017 "rw_mbytes_per_sec": 0, 00:14:10.017 "r_mbytes_per_sec": 0, 00:14:10.017 "w_mbytes_per_sec": 0 00:14:10.017 }, 00:14:10.017 "claimed": true, 00:14:10.017 "claim_type": "exclusive_write", 00:14:10.017 "zoned": false, 00:14:10.017 "supported_io_types": { 00:14:10.017 "read": true, 00:14:10.017 "write": true, 00:14:10.017 "unmap": true, 00:14:10.017 "flush": true, 00:14:10.017 "reset": true, 00:14:10.017 "nvme_admin": false, 00:14:10.017 "nvme_io": false, 00:14:10.017 "nvme_io_md": false, 00:14:10.017 "write_zeroes": true, 00:14:10.017 "zcopy": true, 00:14:10.017 "get_zone_info": false, 00:14:10.017 "zone_management": false, 00:14:10.017 "zone_append": false, 00:14:10.017 "compare": false, 00:14:10.017 "compare_and_write": false, 00:14:10.017 "abort": 
true, 00:14:10.017 "seek_hole": false, 00:14:10.017 "seek_data": false, 00:14:10.017 "copy": true, 00:14:10.017 "nvme_iov_md": false 00:14:10.017 }, 00:14:10.017 "memory_domains": [ 00:14:10.017 { 00:14:10.017 "dma_device_id": "system", 00:14:10.017 "dma_device_type": 1 00:14:10.017 }, 00:14:10.017 { 00:14:10.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.017 "dma_device_type": 2 00:14:10.017 } 00:14:10.017 ], 00:14:10.017 "driver_specific": { 00:14:10.017 "passthru": { 00:14:10.017 "name": "pt4", 00:14:10.017 "base_bdev_name": "malloc4" 00:14:10.017 } 00:14:10.017 } 00:14:10.017 }' 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:10.017 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:10.275 [2024-07-15 18:28:02.622298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.275 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5 00:14:10.275 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5 ']' 00:14:10.275 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:10.533 [2024-07-15 18:28:02.894224] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.533 [2024-07-15 18:28:02.894254] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.533 [2024-07-15 18:28:02.894280] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.533 [2024-07-15 18:28:02.894297] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.533 [2024-07-15 18:28:02.894302] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x342ada635900 name raid_bdev1, state offline 00:14:10.533 18:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.533 18:28:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:10.791 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:10.791 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:10.791 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:10.791 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:11.358 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.358 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:11.358 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.358 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:11.616 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.616 18:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:11.873 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:11.873 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.437 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:12.438 [2024-07-15 18:28:04.758371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:12.438 [2024-07-15 18:28:04.758976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:12.438 [2024-07-15 18:28:04.759008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:12.438 [2024-07-15 18:28:04.759018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:12.438 [2024-07-15 18:28:04.759033] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:12.438 [2024-07-15 18:28:04.759071] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:12.438 [2024-07-15 18:28:04.759084] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:12.438 [2024-07-15 18:28:04.759093] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:12.438 [2024-07-15 18:28:04.759102] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.438 [2024-07-15 18:28:04.759106] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x342ada635680 name raid_bdev1, state configuring 00:14:12.438 request: 00:14:12.438 { 00:14:12.438 "name": "raid_bdev1", 00:14:12.438 "raid_level": "raid0", 00:14:12.438 "base_bdevs": [ 00:14:12.438 "malloc1", 00:14:12.438 "malloc2", 00:14:12.438 "malloc3", 00:14:12.438 "malloc4" 00:14:12.438 ], 00:14:12.438 "strip_size_kb": 64, 00:14:12.438 "superblock": false, 00:14:12.438 "method": "bdev_raid_create", 00:14:12.438 "req_id": 1 00:14:12.438 } 00:14:12.438 Got JSON-RPC error response 00:14:12.438 response: 00:14:12.438 { 00:14:12.438 "code": -17, 00:14:12.438 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:12.438 } 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.438 18:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:12.695 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:12.695 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:12.695 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:12.953 [2024-07-15 18:28:05.282398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:12.953 [2024-07-15 18:28:05.282452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:12.953 [2024-07-15 18:28:05.282465] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada635180 00:14:12.953 [2024-07-15 18:28:05.282473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.953 [2024-07-15 18:28:05.283155] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.953 [2024-07-15 18:28:05.283179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:12.953 [2024-07-15 18:28:05.283205] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:12.953 [2024-07-15 18:28:05.283217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:12.953 pt1 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.953 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.211 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.211 "name": "raid_bdev1", 00:14:13.211 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:13.211 "strip_size_kb": 64, 00:14:13.211 "state": "configuring", 00:14:13.211 "raid_level": "raid0", 00:14:13.211 "superblock": true, 00:14:13.211 "num_base_bdevs": 4, 00:14:13.211 "num_base_bdevs_discovered": 1, 00:14:13.211 "num_base_bdevs_operational": 4, 00:14:13.211 "base_bdevs_list": [ 00:14:13.211 { 00:14:13.211 "name": "pt1", 00:14:13.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.211 "is_configured": true, 00:14:13.211 "data_offset": 2048, 00:14:13.211 "data_size": 63488 00:14:13.211 }, 00:14:13.211 { 00:14:13.211 "name": null, 00:14:13.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.211 "is_configured": false, 00:14:13.211 "data_offset": 2048, 00:14:13.211 "data_size": 63488 00:14:13.211 }, 00:14:13.211 { 00:14:13.211 "name": null, 00:14:13.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.211 "is_configured": false, 00:14:13.211 "data_offset": 2048, 00:14:13.211 "data_size": 63488 00:14:13.211 }, 00:14:13.211 { 00:14:13.211 "name": null, 00:14:13.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:13.211 "is_configured": false, 00:14:13.211 "data_offset": 2048, 00:14:13.211 "data_size": 63488 
00:14:13.211 } 00:14:13.211 ] 00:14:13.211 }' 00:14:13.211 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.211 18:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.776 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:14:13.776 18:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.034 [2024-07-15 18:28:06.166474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.034 [2024-07-15 18:28:06.166532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.034 [2024-07-15 18:28:06.166545] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada634780 00:14:14.034 [2024-07-15 18:28:06.166554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.034 [2024-07-15 18:28:06.166672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.034 [2024-07-15 18:28:06.166684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.034 [2024-07-15 18:28:06.166708] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:14.034 [2024-07-15 18:28:06.166718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.034 pt2 00:14:14.034 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:14.292 [2024-07-15 18:28:06.438499] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.292 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.551 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.551 "name": "raid_bdev1", 00:14:14.551 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:14.551 "strip_size_kb": 64, 00:14:14.551 "state": "configuring", 00:14:14.551 "raid_level": 
"raid0", 00:14:14.551 "superblock": true, 00:14:14.551 "num_base_bdevs": 4, 00:14:14.551 "num_base_bdevs_discovered": 1, 00:14:14.551 "num_base_bdevs_operational": 4, 00:14:14.551 "base_bdevs_list": [ 00:14:14.551 { 00:14:14.551 "name": "pt1", 00:14:14.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.551 "is_configured": true, 00:14:14.551 "data_offset": 2048, 00:14:14.551 "data_size": 63488 00:14:14.551 }, 00:14:14.551 { 00:14:14.551 "name": null, 00:14:14.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.551 "is_configured": false, 00:14:14.551 "data_offset": 2048, 00:14:14.551 "data_size": 63488 00:14:14.551 }, 00:14:14.551 { 00:14:14.551 "name": null, 00:14:14.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.551 "is_configured": false, 00:14:14.551 "data_offset": 2048, 00:14:14.551 "data_size": 63488 00:14:14.551 }, 00:14:14.551 { 00:14:14.551 "name": null, 00:14:14.551 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.551 "is_configured": false, 00:14:14.551 "data_offset": 2048, 00:14:14.551 "data_size": 63488 00:14:14.551 } 00:14:14.551 ] 00:14:14.551 }' 00:14:14.552 18:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.552 18:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.810 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:14.810 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:14.810 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.068 [2024-07-15 18:28:07.362564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.068 [2024-07-15 18:28:07.362635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.068 [2024-07-15 18:28:07.362648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada634780 00:14:15.068 [2024-07-15 18:28:07.362656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.068 [2024-07-15 18:28:07.362773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.068 [2024-07-15 18:28:07.362785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.068 [2024-07-15 18:28:07.362809] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:15.068 [2024-07-15 18:28:07.362818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.068 pt2 00:14:15.068 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:15.068 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:15.068 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:15.327 [2024-07-15 18:28:07.610585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:15.327 [2024-07-15 18:28:07.610640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.327 [2024-07-15 18:28:07.610652] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada635b80 00:14:15.327 
[2024-07-15 18:28:07.610660] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.327 [2024-07-15 18:28:07.610777] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.327 [2024-07-15 18:28:07.610789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:15.327 [2024-07-15 18:28:07.610812] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:15.327 [2024-07-15 18:28:07.610821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:15.327 pt3 00:14:15.327 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:15.327 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:15.327 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:15.585 [2024-07-15 18:28:07.834612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:15.585 [2024-07-15 18:28:07.834664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.585 [2024-07-15 18:28:07.834676] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x342ada635900 00:14:15.585 [2024-07-15 18:28:07.834684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.585 [2024-07-15 18:28:07.834802] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.585 [2024-07-15 18:28:07.834813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:15.585 [2024-07-15 18:28:07.834836] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:15.585 [2024-07-15 18:28:07.834845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:15.585 [2024-07-15 18:28:07.834877] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x342ada634c80 00:14:15.585 [2024-07-15 18:28:07.834881] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:15.585 [2024-07-15 18:28:07.834902] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x342ada697e20 00:14:15.585 [2024-07-15 18:28:07.834957] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x342ada634c80 00:14:15.585 [2024-07-15 18:28:07.834962] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x342ada634c80 00:14:15.585 [2024-07-15 18:28:07.834984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.585 pt4 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.585 18:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.844 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.844 "name": "raid_bdev1", 00:14:15.844 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:15.844 "strip_size_kb": 64, 00:14:15.844 "state": "online", 00:14:15.844 "raid_level": "raid0", 00:14:15.844 "superblock": true, 00:14:15.844 "num_base_bdevs": 4, 00:14:15.844 "num_base_bdevs_discovered": 4, 00:14:15.844 "num_base_bdevs_operational": 4, 00:14:15.844 "base_bdevs_list": [ 00:14:15.845 { 00:14:15.845 "name": "pt1", 00:14:15.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.845 "is_configured": true, 00:14:15.845 "data_offset": 2048, 00:14:15.845 "data_size": 63488 00:14:15.845 }, 00:14:15.845 { 00:14:15.845 "name": "pt2", 00:14:15.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.845 "is_configured": true, 00:14:15.845 "data_offset": 2048, 00:14:15.845 "data_size": 63488 00:14:15.845 }, 00:14:15.845 { 00:14:15.845 "name": "pt3", 00:14:15.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.845 "is_configured": true, 00:14:15.845 "data_offset": 2048, 00:14:15.845 "data_size": 63488 00:14:15.845 }, 00:14:15.845 { 00:14:15.845 "name": "pt4", 00:14:15.845 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.845 "is_configured": true, 00:14:15.845 "data_offset": 2048, 00:14:15.845 "data_size": 63488 00:14:15.845 } 00:14:15.845 ] 00:14:15.845 }' 00:14:15.845 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.845 18:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:16.411 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:16.411 [2024-07-15 18:28:08.798737] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.670 18:28:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:16.670 "name": "raid_bdev1", 00:14:16.670 "aliases": [ 00:14:16.670 "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5" 00:14:16.670 ], 00:14:16.670 "product_name": "Raid Volume", 00:14:16.670 "block_size": 512, 00:14:16.670 "num_blocks": 253952, 00:14:16.670 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:16.670 "assigned_rate_limits": { 00:14:16.670 "rw_ios_per_sec": 0, 00:14:16.670 "rw_mbytes_per_sec": 0, 00:14:16.670 "r_mbytes_per_sec": 0, 00:14:16.670 "w_mbytes_per_sec": 0 00:14:16.670 }, 00:14:16.670 "claimed": false, 00:14:16.670 "zoned": false, 00:14:16.670 "supported_io_types": { 00:14:16.670 "read": true, 00:14:16.670 "write": true, 00:14:16.670 "unmap": true, 00:14:16.670 "flush": true, 00:14:16.670 "reset": true, 00:14:16.670 "nvme_admin": false, 00:14:16.670 "nvme_io": false, 00:14:16.670 "nvme_io_md": false, 00:14:16.670 "write_zeroes": true, 00:14:16.670 "zcopy": false, 00:14:16.670 "get_zone_info": false, 00:14:16.670 "zone_management": false, 00:14:16.670 "zone_append": false, 00:14:16.670 "compare": false, 00:14:16.670 "compare_and_write": false, 00:14:16.670 "abort": false, 00:14:16.670 "seek_hole": false, 00:14:16.670 "seek_data": false, 00:14:16.670 "copy": false, 00:14:16.670 "nvme_iov_md": false 00:14:16.670 }, 00:14:16.670 "memory_domains": [ 00:14:16.670 { 00:14:16.670 "dma_device_id": "system", 00:14:16.670 "dma_device_type": 1 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.670 "dma_device_type": 2 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "system", 00:14:16.670 "dma_device_type": 1 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.670 "dma_device_type": 2 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "system", 00:14:16.670 "dma_device_type": 1 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.670 "dma_device_type": 2 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "system", 00:14:16.670 "dma_device_type": 1 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.670 "dma_device_type": 2 00:14:16.670 } 00:14:16.670 ], 00:14:16.670 "driver_specific": { 00:14:16.670 "raid": { 00:14:16.670 "uuid": "f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5", 00:14:16.670 "strip_size_kb": 64, 00:14:16.670 "state": "online", 00:14:16.670 "raid_level": "raid0", 00:14:16.670 "superblock": true, 00:14:16.670 "num_base_bdevs": 4, 00:14:16.670 "num_base_bdevs_discovered": 4, 00:14:16.670 "num_base_bdevs_operational": 4, 00:14:16.670 "base_bdevs_list": [ 00:14:16.670 { 00:14:16.670 "name": "pt1", 00:14:16.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.670 "is_configured": true, 00:14:16.670 "data_offset": 2048, 00:14:16.670 "data_size": 63488 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "name": "pt2", 00:14:16.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.670 "is_configured": true, 00:14:16.670 "data_offset": 2048, 00:14:16.670 "data_size": 63488 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "name": "pt3", 00:14:16.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.670 "is_configured": true, 00:14:16.670 "data_offset": 2048, 00:14:16.670 "data_size": 63488 00:14:16.670 }, 00:14:16.670 { 00:14:16.670 "name": "pt4", 00:14:16.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.670 "is_configured": true, 00:14:16.670 "data_offset": 2048, 00:14:16.670 
"data_size": 63488 00:14:16.670 } 00:14:16.670 ] 00:14:16.670 } 00:14:16.670 } 00:14:16.670 }' 00:14:16.670 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.670 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:16.670 pt2 00:14:16.670 pt3 00:14:16.670 pt4' 00:14:16.670 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.670 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.670 18:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.929 "name": "pt1", 00:14:16.929 "aliases": [ 00:14:16.929 "00000000-0000-0000-0000-000000000001" 00:14:16.929 ], 00:14:16.929 "product_name": "passthru", 00:14:16.929 "block_size": 512, 00:14:16.929 "num_blocks": 65536, 00:14:16.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.929 "assigned_rate_limits": { 00:14:16.929 "rw_ios_per_sec": 0, 00:14:16.929 "rw_mbytes_per_sec": 0, 00:14:16.929 "r_mbytes_per_sec": 0, 00:14:16.929 "w_mbytes_per_sec": 0 00:14:16.929 }, 00:14:16.929 "claimed": true, 00:14:16.929 "claim_type": "exclusive_write", 00:14:16.929 "zoned": false, 00:14:16.929 "supported_io_types": { 00:14:16.929 "read": true, 00:14:16.929 "write": true, 00:14:16.929 "unmap": true, 00:14:16.929 "flush": true, 00:14:16.929 "reset": true, 00:14:16.929 "nvme_admin": false, 00:14:16.929 "nvme_io": false, 00:14:16.929 "nvme_io_md": false, 00:14:16.929 "write_zeroes": true, 00:14:16.929 "zcopy": true, 00:14:16.929 "get_zone_info": false, 00:14:16.929 "zone_management": false, 00:14:16.929 "zone_append": false, 00:14:16.929 "compare": false, 00:14:16.929 "compare_and_write": false, 00:14:16.929 "abort": true, 00:14:16.929 "seek_hole": false, 00:14:16.929 "seek_data": false, 00:14:16.929 "copy": true, 00:14:16.929 "nvme_iov_md": false 00:14:16.929 }, 00:14:16.929 "memory_domains": [ 00:14:16.929 { 00:14:16.929 "dma_device_id": "system", 00:14:16.929 "dma_device_type": 1 00:14:16.929 }, 00:14:16.929 { 00:14:16.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.929 "dma_device_type": 2 00:14:16.929 } 00:14:16.929 ], 00:14:16.929 "driver_specific": { 00:14:16.929 "passthru": { 00:14:16.929 "name": "pt1", 00:14:16.929 "base_bdev_name": "malloc1" 00:14:16.929 } 00:14:16.929 } 00:14:16.929 }' 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.929 18:28:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:16.929 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.198 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.198 "name": "pt2", 00:14:17.198 "aliases": [ 00:14:17.198 "00000000-0000-0000-0000-000000000002" 00:14:17.198 ], 00:14:17.198 "product_name": "passthru", 00:14:17.198 "block_size": 512, 00:14:17.198 "num_blocks": 65536, 00:14:17.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.198 "assigned_rate_limits": { 00:14:17.198 "rw_ios_per_sec": 0, 00:14:17.198 "rw_mbytes_per_sec": 0, 00:14:17.198 "r_mbytes_per_sec": 0, 00:14:17.198 "w_mbytes_per_sec": 0 00:14:17.198 }, 00:14:17.198 "claimed": true, 00:14:17.198 "claim_type": "exclusive_write", 00:14:17.198 "zoned": false, 00:14:17.198 "supported_io_types": { 00:14:17.198 "read": true, 00:14:17.198 "write": true, 00:14:17.198 "unmap": true, 00:14:17.198 "flush": true, 00:14:17.198 "reset": true, 00:14:17.198 "nvme_admin": false, 00:14:17.198 "nvme_io": false, 00:14:17.198 "nvme_io_md": false, 00:14:17.198 "write_zeroes": true, 00:14:17.198 "zcopy": true, 00:14:17.198 "get_zone_info": false, 00:14:17.198 "zone_management": false, 00:14:17.198 "zone_append": false, 00:14:17.198 "compare": false, 00:14:17.198 "compare_and_write": false, 00:14:17.198 "abort": true, 00:14:17.198 "seek_hole": false, 00:14:17.198 "seek_data": false, 00:14:17.198 "copy": true, 00:14:17.198 "nvme_iov_md": false 00:14:17.198 }, 00:14:17.198 "memory_domains": [ 00:14:17.198 { 00:14:17.198 "dma_device_id": "system", 00:14:17.198 "dma_device_type": 1 00:14:17.198 }, 00:14:17.198 { 00:14:17.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.198 "dma_device_type": 2 00:14:17.198 } 00:14:17.198 ], 00:14:17.198 "driver_specific": { 00:14:17.198 "passthru": { 00:14:17.198 "name": "pt2", 00:14:17.198 "base_bdev_name": "malloc2" 00:14:17.198 } 00:14:17.198 } 00:14:17.198 }' 00:14:17.198 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.198 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:17.199 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.489 "name": "pt3", 00:14:17.489 "aliases": [ 00:14:17.489 "00000000-0000-0000-0000-000000000003" 00:14:17.489 ], 00:14:17.489 "product_name": "passthru", 00:14:17.489 "block_size": 512, 00:14:17.489 "num_blocks": 65536, 00:14:17.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.489 "assigned_rate_limits": { 00:14:17.489 "rw_ios_per_sec": 0, 00:14:17.489 "rw_mbytes_per_sec": 0, 00:14:17.489 "r_mbytes_per_sec": 0, 00:14:17.489 "w_mbytes_per_sec": 0 00:14:17.489 }, 00:14:17.489 "claimed": true, 00:14:17.489 "claim_type": "exclusive_write", 00:14:17.489 "zoned": false, 00:14:17.489 "supported_io_types": { 00:14:17.489 "read": true, 00:14:17.489 "write": true, 00:14:17.489 "unmap": true, 00:14:17.489 "flush": true, 00:14:17.489 "reset": true, 00:14:17.489 "nvme_admin": false, 00:14:17.489 "nvme_io": false, 00:14:17.489 "nvme_io_md": false, 00:14:17.489 "write_zeroes": true, 00:14:17.489 "zcopy": true, 00:14:17.489 "get_zone_info": false, 00:14:17.489 "zone_management": false, 00:14:17.489 "zone_append": false, 00:14:17.489 "compare": false, 00:14:17.489 "compare_and_write": false, 00:14:17.489 "abort": true, 00:14:17.489 "seek_hole": false, 00:14:17.489 "seek_data": false, 00:14:17.489 "copy": true, 00:14:17.489 "nvme_iov_md": false 00:14:17.489 }, 00:14:17.489 "memory_domains": [ 00:14:17.489 { 00:14:17.489 "dma_device_id": "system", 00:14:17.489 "dma_device_type": 1 00:14:17.489 }, 00:14:17.489 { 00:14:17.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.489 "dma_device_type": 2 00:14:17.489 } 00:14:17.489 ], 00:14:17.489 "driver_specific": { 00:14:17.489 "passthru": { 00:14:17.489 "name": "pt3", 00:14:17.489 "base_bdev_name": "malloc3" 00:14:17.489 } 00:14:17.489 } 00:14:17.489 }' 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.489 18:28:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:17.489 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.056 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.056 "name": "pt4", 00:14:18.056 "aliases": [ 00:14:18.056 "00000000-0000-0000-0000-000000000004" 00:14:18.056 ], 00:14:18.056 "product_name": "passthru", 00:14:18.056 "block_size": 512, 00:14:18.056 "num_blocks": 65536, 00:14:18.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.056 "assigned_rate_limits": { 00:14:18.056 "rw_ios_per_sec": 0, 00:14:18.056 "rw_mbytes_per_sec": 0, 00:14:18.056 "r_mbytes_per_sec": 0, 00:14:18.056 "w_mbytes_per_sec": 0 00:14:18.056 }, 00:14:18.056 "claimed": true, 00:14:18.056 "claim_type": "exclusive_write", 00:14:18.056 "zoned": false, 00:14:18.056 "supported_io_types": { 00:14:18.056 "read": true, 00:14:18.056 "write": true, 00:14:18.056 "unmap": true, 00:14:18.056 "flush": true, 00:14:18.056 "reset": true, 00:14:18.056 "nvme_admin": false, 00:14:18.056 "nvme_io": false, 00:14:18.056 "nvme_io_md": false, 00:14:18.056 "write_zeroes": true, 00:14:18.056 "zcopy": true, 00:14:18.056 "get_zone_info": false, 00:14:18.056 "zone_management": false, 00:14:18.056 "zone_append": false, 00:14:18.056 "compare": false, 00:14:18.056 "compare_and_write": false, 00:14:18.056 "abort": true, 00:14:18.056 "seek_hole": false, 00:14:18.056 "seek_data": false, 00:14:18.056 "copy": true, 00:14:18.056 "nvme_iov_md": false 00:14:18.056 }, 00:14:18.056 "memory_domains": [ 00:14:18.056 { 00:14:18.056 "dma_device_id": "system", 00:14:18.056 "dma_device_type": 1 00:14:18.056 }, 00:14:18.056 { 00:14:18.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.056 "dma_device_type": 2 00:14:18.056 } 00:14:18.056 ], 00:14:18.056 "driver_specific": { 00:14:18.056 "passthru": { 00:14:18.056 "name": "pt4", 00:14:18.056 "base_bdev_name": "malloc4" 00:14:18.057 } 00:14:18.057 } 00:14:18.057 }' 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:18.057 18:28:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:18.315 [2024-07-15 18:28:10.510860] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5 '!=' f6f9c2c9-42d7-11ef-9ade-d5fc5159efa5 ']' 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 60056 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 60056 ']' 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 60056 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 60056 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:14:18.315 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:18.316 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:18.316 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60056' 00:14:18.316 killing process with pid 60056 00:14:18.316 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 60056 00:14:18.316 [2024-07-15 18:28:10.541215] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.316 [2024-07-15 18:28:10.541244] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.316 [2024-07-15 18:28:10.541261] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.316 [2024-07-15 18:28:10.541266] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x342ada634c80 name raid_bdev1, state offline 00:14:18.316 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 60056 00:14:18.316 [2024-07-15 18:28:10.569854] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.575 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:18.575 00:14:18.575 real 0m14.374s 00:14:18.575 user 0m25.592s 00:14:18.575 sys 0m2.278s 00:14:18.575 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.575 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.575 ************************************ 00:14:18.575 END TEST raid_superblock_test 00:14:18.575 ************************************ 00:14:18.575 18:28:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:18.575 18:28:10 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:18.575 18:28:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:18.575 18:28:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.575 18:28:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.575 ************************************ 00:14:18.575 START TEST raid_read_error_test 00:14:18.575 ************************************ 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:18.575 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KlTZXIuyQt 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60461 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60461 /var/tmp/spdk-raid.sock 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60461 ']' 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.576 18:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.576 [2024-07-15 18:28:10.853144] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:18.576 [2024-07-15 18:28:10.853325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:19.183 EAL: TSC is not safe to use in SMP mode 00:14:19.183 EAL: TSC is not invariant 00:14:19.183 [2024-07-15 18:28:11.435496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.183 [2024-07-15 18:28:11.551025] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
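(Annotation, not part of the captured trace.) raid_read_error_test drives I/O with bdevperf (-T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid) so that read failures can later be injected into a single raid0 member. Each member built below is a three-layer stack: a malloc base, an error-injection bdev on top of it, and a passthru bdev exposed to the raid. A minimal sketch of one such stack, using the same RPC commands and names this test issues over the socket:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1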
00:14:19.183 [2024-07-15 18:28:11.553543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.183 [2024-07-15 18:28:11.554574] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.183 [2024-07-15 18:28:11.554606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.749 18:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.749 18:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:19.749 18:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:19.749 18:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.007 BaseBdev1_malloc 00:14:20.007 18:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:20.265 true 00:14:20.265 18:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:20.523 [2024-07-15 18:28:12.740852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:20.523 [2024-07-15 18:28:12.740920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.523 [2024-07-15 18:28:12.740955] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x126c1b434780 00:14:20.523 [2024-07-15 18:28:12.740964] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.523 [2024-07-15 18:28:12.741649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.523 [2024-07-15 18:28:12.741676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.523 BaseBdev1 00:14:20.523 18:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:20.523 18:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.781 BaseBdev2_malloc 00:14:20.781 18:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:21.039 true 00:14:21.039 18:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:21.297 [2024-07-15 18:28:13.552904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:21.297 [2024-07-15 18:28:13.552961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.297 [2024-07-15 18:28:13.552989] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x126c1b434c80 00:14:21.297 [2024-07-15 18:28:13.552998] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.297 [2024-07-15 18:28:13.553698] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.297 [2024-07-15 18:28:13.553724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:14:21.297 BaseBdev2 00:14:21.297 18:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:21.297 18:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:21.554 BaseBdev3_malloc 00:14:21.554 18:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:21.812 true 00:14:21.812 18:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:22.071 [2024-07-15 18:28:14.300957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:22.071 [2024-07-15 18:28:14.301014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.071 [2024-07-15 18:28:14.301041] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x126c1b435180 00:14:22.071 [2024-07-15 18:28:14.301050] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.071 [2024-07-15 18:28:14.301740] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.071 [2024-07-15 18:28:14.301767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.071 BaseBdev3 00:14:22.071 18:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:22.071 18:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:22.329 BaseBdev4_malloc 00:14:22.329 18:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:22.587 true 00:14:22.587 18:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:22.846 [2024-07-15 18:28:15.017006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:22.846 [2024-07-15 18:28:15.017068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.846 [2024-07-15 18:28:15.017095] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x126c1b435680 00:14:22.846 [2024-07-15 18:28:15.017103] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.846 [2024-07-15 18:28:15.017768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.846 [2024-07-15 18:28:15.017795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:22.846 BaseBdev4 00:14:22.846 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:23.129 [2024-07-15 18:28:15.321041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.129 [2024-07-15 18:28:15.321680] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.129 [2024-07-15 18:28:15.321726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.129 [2024-07-15 18:28:15.321747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.129 [2024-07-15 18:28:15.321818] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x126c1b435900 00:14:23.129 [2024-07-15 18:28:15.321824] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:23.129 [2024-07-15 18:28:15.321865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x126c1b4a0e20 00:14:23.129 [2024-07-15 18:28:15.321941] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x126c1b435900 00:14:23.129 [2024-07-15 18:28:15.321946] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x126c1b435900 00:14:23.129 [2024-07-15 18:28:15.321976] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.129 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.388 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.388 "name": "raid_bdev1", 00:14:23.388 "uuid": "001045b8-42d8-11ef-9ade-d5fc5159efa5", 00:14:23.388 "strip_size_kb": 64, 00:14:23.388 "state": "online", 00:14:23.388 "raid_level": "raid0", 00:14:23.388 "superblock": true, 00:14:23.388 "num_base_bdevs": 4, 00:14:23.388 "num_base_bdevs_discovered": 4, 00:14:23.388 "num_base_bdevs_operational": 4, 00:14:23.388 "base_bdevs_list": [ 00:14:23.388 { 00:14:23.388 "name": "BaseBdev1", 00:14:23.388 "uuid": "e01657bc-c3d3-f252-87c2-9ee61abf9655", 00:14:23.388 "is_configured": true, 00:14:23.388 "data_offset": 2048, 00:14:23.388 "data_size": 63488 00:14:23.388 }, 00:14:23.388 { 00:14:23.388 "name": "BaseBdev2", 00:14:23.388 "uuid": "4c11b890-53f1-ea56-9bd7-c17762b10f4e", 00:14:23.388 "is_configured": true, 00:14:23.388 "data_offset": 2048, 00:14:23.388 "data_size": 63488 00:14:23.388 }, 00:14:23.388 { 00:14:23.388 "name": "BaseBdev3", 00:14:23.388 "uuid": 
"fe571d1f-c213-065d-a055-ee45ea44f2b9", 00:14:23.388 "is_configured": true, 00:14:23.388 "data_offset": 2048, 00:14:23.388 "data_size": 63488 00:14:23.388 }, 00:14:23.388 { 00:14:23.388 "name": "BaseBdev4", 00:14:23.388 "uuid": "b9cf033b-14f6-645c-a225-0e09ea63beda", 00:14:23.388 "is_configured": true, 00:14:23.388 "data_offset": 2048, 00:14:23.388 "data_size": 63488 00:14:23.388 } 00:14:23.388 ] 00:14:23.388 }' 00:14:23.388 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.388 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:23.647 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:23.906 [2024-07-15 18:28:16.037309] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x126c1b4a0ec0 00:14:24.842 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.101 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.359 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.359 "name": "raid_bdev1", 00:14:25.359 "uuid": "001045b8-42d8-11ef-9ade-d5fc5159efa5", 00:14:25.359 "strip_size_kb": 64, 00:14:25.359 "state": "online", 00:14:25.359 "raid_level": "raid0", 00:14:25.359 "superblock": true, 00:14:25.359 "num_base_bdevs": 4, 00:14:25.359 "num_base_bdevs_discovered": 4, 00:14:25.359 "num_base_bdevs_operational": 4, 00:14:25.359 "base_bdevs_list": [ 00:14:25.359 { 00:14:25.359 "name": "BaseBdev1", 00:14:25.359 "uuid": 
"e01657bc-c3d3-f252-87c2-9ee61abf9655", 00:14:25.359 "is_configured": true, 00:14:25.359 "data_offset": 2048, 00:14:25.359 "data_size": 63488 00:14:25.359 }, 00:14:25.359 { 00:14:25.359 "name": "BaseBdev2", 00:14:25.359 "uuid": "4c11b890-53f1-ea56-9bd7-c17762b10f4e", 00:14:25.359 "is_configured": true, 00:14:25.359 "data_offset": 2048, 00:14:25.359 "data_size": 63488 00:14:25.359 }, 00:14:25.359 { 00:14:25.359 "name": "BaseBdev3", 00:14:25.359 "uuid": "fe571d1f-c213-065d-a055-ee45ea44f2b9", 00:14:25.359 "is_configured": true, 00:14:25.359 "data_offset": 2048, 00:14:25.359 "data_size": 63488 00:14:25.359 }, 00:14:25.359 { 00:14:25.359 "name": "BaseBdev4", 00:14:25.359 "uuid": "b9cf033b-14f6-645c-a225-0e09ea63beda", 00:14:25.359 "is_configured": true, 00:14:25.359 "data_offset": 2048, 00:14:25.359 "data_size": 63488 00:14:25.359 } 00:14:25.359 ] 00:14:25.359 }' 00:14:25.359 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.359 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.618 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.877 [2024-07-15 18:28:18.225711] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.877 [2024-07-15 18:28:18.225738] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.877 [2024-07-15 18:28:18.226084] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.877 [2024-07-15 18:28:18.226103] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.877 [2024-07-15 18:28:18.226113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.877 [2024-07-15 18:28:18.226117] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x126c1b435900 name raid_bdev1, state offline 00:14:25.877 0 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60461 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60461 ']' 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60461 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60461 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60461' 00:14:25.877 killing process with pid 60461 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60461 00:14:25.877 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60461 00:14:25.877 [2024-07-15 18:28:18.255848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.135 [2024-07-15 18:28:18.283661] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.135 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KlTZXIuyQt 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:14:26.136 00:14:26.136 real 0m7.669s 00:14:26.136 user 0m12.263s 00:14:26.136 sys 0m1.271s 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.136 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.136 ************************************ 00:14:26.136 END TEST raid_read_error_test 00:14:26.136 ************************************ 00:14:26.420 18:28:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:26.420 18:28:18 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:26.420 18:28:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:26.420 18:28:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.420 18:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.420 ************************************ 00:14:26.420 START TEST raid_write_error_test 00:14:26.420 ************************************ 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3q6MbTOg1Q 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60599 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60599 /var/tmp/spdk-raid.sock 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60599 ']' 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.420 18:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.420 [2024-07-15 18:28:18.566474] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:26.420 [2024-07-15 18:28:18.566713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:26.996 EAL: TSC is not safe to use in SMP mode 00:14:26.996 EAL: TSC is not invariant 00:14:26.996 [2024-07-15 18:28:19.167483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.996 [2024-07-15 18:28:19.276012] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
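The write variant that begins here mirrors the read test above step for step; only the injected I/O type changes. For orientation, a condensed sketch of the measurement phase as it appears in this trace (RPC as in the earlier sketch; the log path is this run's mktemp result, and column 6 is where the failure rate sits in bdevperf's per-bdev summary):

  # start the timed I/O run in the background, then inject write failures while it is in flight
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &
  sleep 1
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # after teardown, pull the recorded failure rate for raid_bdev1
  fail_per_s=$(grep -v Job /raidtest/tmp.3q6MbTOg1Q | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" != "0.00" ]]   # raid0 carries no redundancy, so injected errors must surface

In this run the read test measured 0.46 failures per second; the tail of this trace shows the write test landing at 0.49.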
00:14:26.996 [2024-07-15 18:28:19.278172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.996 [2024-07-15 18:28:19.278967] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.996 [2024-07-15 18:28:19.278982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.565 18:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.565 18:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:27.565 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:27.565 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:27.565 BaseBdev1_malloc 00:14:27.823 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:27.823 true 00:14:27.823 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:28.389 [2024-07-15 18:28:20.471586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:28.389 [2024-07-15 18:28:20.471656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.389 [2024-07-15 18:28:20.471688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f12a0234780 00:14:28.389 [2024-07-15 18:28:20.471697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.389 [2024-07-15 18:28:20.472419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.389 [2024-07-15 18:28:20.472446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.389 BaseBdev1 00:14:28.389 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:28.389 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.389 BaseBdev2_malloc 00:14:28.389 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:28.955 true 00:14:28.955 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:28.955 [2024-07-15 18:28:21.307633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:28.955 [2024-07-15 18:28:21.307712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.955 [2024-07-15 18:28:21.307739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f12a0234c80 00:14:28.955 [2024-07-15 18:28:21.307748] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.955 [2024-07-15 18:28:21.308473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.955 [2024-07-15 18:28:21.308500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:14:28.955 BaseBdev2 00:14:28.955 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:28.955 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.214 BaseBdev3_malloc 00:14:29.214 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:29.473 true 00:14:29.473 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:30.040 [2024-07-15 18:28:22.143698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:30.040 [2024-07-15 18:28:22.143764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.040 [2024-07-15 18:28:22.143793] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f12a0235180 00:14:30.040 [2024-07-15 18:28:22.143802] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.040 [2024-07-15 18:28:22.144516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.040 [2024-07-15 18:28:22.144543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.040 BaseBdev3 00:14:30.040 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:30.040 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.040 BaseBdev4_malloc 00:14:30.040 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:30.298 true 00:14:30.557 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.557 [2024-07-15 18:28:22.919748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.557 [2024-07-15 18:28:22.919807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.557 [2024-07-15 18:28:22.919835] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f12a0235680 00:14:30.557 [2024-07-15 18:28:22.919844] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.557 [2024-07-15 18:28:22.920557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.557 [2024-07-15 18:28:22.920584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.557 BaseBdev4 00:14:30.557 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:30.816 [2024-07-15 18:28:23.179783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.816 [2024-07-15 18:28:23.180422] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.816 [2024-07-15 18:28:23.180452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.816 [2024-07-15 18:28:23.180469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.816 [2024-07-15 18:28:23.180539] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f12a0235900 00:14:30.816 [2024-07-15 18:28:23.180545] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.816 [2024-07-15 18:28:23.180587] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f12a02a0e20 00:14:30.816 [2024-07-15 18:28:23.180671] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f12a0235900 00:14:30.816 [2024-07-15 18:28:23.180676] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f12a0235900 00:14:30.816 [2024-07-15 18:28:23.180704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.816 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.383 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.383 "name": "raid_bdev1", 00:14:31.383 "uuid": "04bf6be2-42d8-11ef-9ade-d5fc5159efa5", 00:14:31.383 "strip_size_kb": 64, 00:14:31.383 "state": "online", 00:14:31.383 "raid_level": "raid0", 00:14:31.383 "superblock": true, 00:14:31.383 "num_base_bdevs": 4, 00:14:31.383 "num_base_bdevs_discovered": 4, 00:14:31.383 "num_base_bdevs_operational": 4, 00:14:31.383 "base_bdevs_list": [ 00:14:31.383 { 00:14:31.383 "name": "BaseBdev1", 00:14:31.383 "uuid": "a7f91c93-c3fc-9f5a-a8de-0b71c24203c5", 00:14:31.383 "is_configured": true, 00:14:31.383 "data_offset": 2048, 00:14:31.383 "data_size": 63488 00:14:31.383 }, 00:14:31.383 { 00:14:31.383 "name": "BaseBdev2", 00:14:31.383 "uuid": "a695ae0c-ceff-675a-ac98-b9f7c4f0eb08", 00:14:31.383 "is_configured": true, 00:14:31.383 "data_offset": 2048, 00:14:31.383 "data_size": 63488 00:14:31.383 }, 00:14:31.383 { 00:14:31.383 "name": "BaseBdev3", 00:14:31.383 "uuid": 
"485c7ae6-eb15-b55b-bfb0-d629f29bc148", 00:14:31.383 "is_configured": true, 00:14:31.383 "data_offset": 2048, 00:14:31.383 "data_size": 63488 00:14:31.383 }, 00:14:31.383 { 00:14:31.383 "name": "BaseBdev4", 00:14:31.383 "uuid": "aef051c9-56bc-3e5a-953a-f4c5eb0f9772", 00:14:31.383 "is_configured": true, 00:14:31.383 "data_offset": 2048, 00:14:31.383 "data_size": 63488 00:14:31.383 } 00:14:31.383 ] 00:14:31.383 }' 00:14:31.383 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.383 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.641 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:31.641 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:31.641 [2024-07-15 18:28:23.960037] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f12a02a0ec0 00:14:32.573 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.831 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.117 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.117 "name": "raid_bdev1", 00:14:33.117 "uuid": "04bf6be2-42d8-11ef-9ade-d5fc5159efa5", 00:14:33.117 "strip_size_kb": 64, 00:14:33.117 "state": "online", 00:14:33.117 "raid_level": "raid0", 00:14:33.117 "superblock": true, 00:14:33.117 "num_base_bdevs": 4, 00:14:33.117 "num_base_bdevs_discovered": 4, 00:14:33.117 "num_base_bdevs_operational": 4, 00:14:33.117 "base_bdevs_list": [ 00:14:33.117 { 00:14:33.117 "name": "BaseBdev1", 00:14:33.117 "uuid": 
"a7f91c93-c3fc-9f5a-a8de-0b71c24203c5", 00:14:33.118 "is_configured": true, 00:14:33.118 "data_offset": 2048, 00:14:33.118 "data_size": 63488 00:14:33.118 }, 00:14:33.118 { 00:14:33.118 "name": "BaseBdev2", 00:14:33.118 "uuid": "a695ae0c-ceff-675a-ac98-b9f7c4f0eb08", 00:14:33.118 "is_configured": true, 00:14:33.118 "data_offset": 2048, 00:14:33.118 "data_size": 63488 00:14:33.118 }, 00:14:33.118 { 00:14:33.118 "name": "BaseBdev3", 00:14:33.118 "uuid": "485c7ae6-eb15-b55b-bfb0-d629f29bc148", 00:14:33.118 "is_configured": true, 00:14:33.118 "data_offset": 2048, 00:14:33.118 "data_size": 63488 00:14:33.118 }, 00:14:33.118 { 00:14:33.118 "name": "BaseBdev4", 00:14:33.118 "uuid": "aef051c9-56bc-3e5a-953a-f4c5eb0f9772", 00:14:33.118 "is_configured": true, 00:14:33.118 "data_offset": 2048, 00:14:33.118 "data_size": 63488 00:14:33.118 } 00:14:33.118 ] 00:14:33.118 }' 00:14:33.118 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.118 18:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.376 18:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:33.633 [2024-07-15 18:28:25.987606] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.633 [2024-07-15 18:28:25.987634] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.633 [2024-07-15 18:28:25.987988] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.633 [2024-07-15 18:28:25.988000] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.633 [2024-07-15 18:28:25.988010] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.633 [2024-07-15 18:28:25.988014] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f12a0235900 name raid_bdev1, state offline 00:14:33.633 0 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60599 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60599 ']' 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60599 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60599 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:33.633 killing process with pid 60599 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60599' 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60599 00:14:33.633 [2024-07-15 18:28:26.015923] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.633 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60599 00:14:33.890 [2024-07-15 
18:28:26.044119] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3q6MbTOg1Q 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:33.890 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:33.891 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:33.891 18:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:14:33.891 00:14:33.891 real 0m7.719s 00:14:33.891 user 0m12.329s 00:14:33.891 sys 0m1.329s 00:14:33.891 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.891 18:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.891 ************************************ 00:14:33.891 END TEST raid_write_error_test 00:14:33.891 ************************************ 00:14:34.148 18:28:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:34.148 18:28:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:34.148 18:28:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:34.148 18:28:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:34.148 18:28:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.148 18:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.148 ************************************ 00:14:34.148 START TEST raid_state_function_test 00:14:34.148 ************************************ 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev3 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:34.148 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60739 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:34.149 Process raid pid: 60739 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60739' 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60739 /var/tmp/spdk-raid.sock 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60739 ']' 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.149 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.149 [2024-07-15 18:28:26.326592] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
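Unlike the two error tests, raid_state_function_test drives no I/O: it runs against the lightweight bdev_svc app (note the -i 0 instance flag above, in place of bdevperf) and walks Existed_Raid through its lifecycle states. The first assertion, visible just below, is that creating a concat array whose base bdevs do not exist yet leaves it "configuring" with zero discovered members. A minimal sketch of that check (RPC as before; the jq filter is the one used throughout the trace, condensed here to pull out only the state field):

  $RPC bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  state=$($RPC bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [[ "$state" == "configuring" ]]   # num_base_bdevs_discovered is still 0

As the trace goes on to show, creating a real BaseBdev1 with bdev_malloc_create bumps num_base_bdevs_discovered to 1 while the state stays "configuring"; the array cannot leave that state until all four base bdevs are discovered.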
00:14:34.149 [2024-07-15 18:28:26.326737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:34.714 EAL: TSC is not safe to use in SMP mode 00:14:34.714 EAL: TSC is not invariant 00:14:34.714 [2024-07-15 18:28:26.917875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.714 [2024-07-15 18:28:27.038383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:34.714 [2024-07-15 18:28:27.040963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.714 [2024-07-15 18:28:27.041884] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.714 [2024-07-15 18:28:27.041900] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.281 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.281 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:35.281 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:35.281 [2024-07-15 18:28:27.651456] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.281 [2024-07-15 18:28:27.651506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.281 [2024-07-15 18:28:27.651511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.281 [2024-07-15 18:28:27.651520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.281 [2024-07-15 18:28:27.651524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.281 [2024-07-15 18:28:27.651531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.281 [2024-07-15 18:28:27.651535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.281 [2024-07-15 18:28:27.651542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.281 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:35.281 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.567 18:28:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.567 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.834 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.834 "name": "Existed_Raid", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.834 "strip_size_kb": 64, 00:14:35.834 "state": "configuring", 00:14:35.834 "raid_level": "concat", 00:14:35.834 "superblock": false, 00:14:35.834 "num_base_bdevs": 4, 00:14:35.834 "num_base_bdevs_discovered": 0, 00:14:35.834 "num_base_bdevs_operational": 4, 00:14:35.834 "base_bdevs_list": [ 00:14:35.834 { 00:14:35.834 "name": "BaseBdev1", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.834 "is_configured": false, 00:14:35.834 "data_offset": 0, 00:14:35.834 "data_size": 0 00:14:35.834 }, 00:14:35.834 { 00:14:35.834 "name": "BaseBdev2", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.834 "is_configured": false, 00:14:35.834 "data_offset": 0, 00:14:35.834 "data_size": 0 00:14:35.834 }, 00:14:35.834 { 00:14:35.834 "name": "BaseBdev3", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.834 "is_configured": false, 00:14:35.834 "data_offset": 0, 00:14:35.834 "data_size": 0 00:14:35.834 }, 00:14:35.834 { 00:14:35.834 "name": "BaseBdev4", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.834 "is_configured": false, 00:14:35.834 "data_offset": 0, 00:14:35.834 "data_size": 0 00:14:35.834 } 00:14:35.834 ] 00:14:35.834 }' 00:14:35.834 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.834 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.093 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.351 [2024-07-15 18:28:28.535501] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.352 [2024-07-15 18:28:28.535528] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32ad92a34500 name Existed_Raid, state configuring 00:14:36.352 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:36.610 [2024-07-15 18:28:28.783524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.610 [2024-07-15 18:28:28.783568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.610 [2024-07-15 18:28:28.783573] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.610 [2024-07-15 18:28:28.783581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.610 [2024-07-15 18:28:28.783585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.610 [2024-07-15 18:28:28.783592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.610 [2024-07-15 18:28:28.783596] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:36.610 [2024-07-15 18:28:28.783603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.610 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.868 [2024-07-15 18:28:29.036639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.868 BaseBdev1 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.868 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.127 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.386 [ 00:14:37.386 { 00:14:37.386 "name": "BaseBdev1", 00:14:37.386 "aliases": [ 00:14:37.386 "083cf1cc-42d8-11ef-9ade-d5fc5159efa5" 00:14:37.386 ], 00:14:37.386 "product_name": "Malloc disk", 00:14:37.386 "block_size": 512, 00:14:37.386 "num_blocks": 65536, 00:14:37.386 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:37.386 "assigned_rate_limits": { 00:14:37.386 "rw_ios_per_sec": 0, 00:14:37.386 "rw_mbytes_per_sec": 0, 00:14:37.386 "r_mbytes_per_sec": 0, 00:14:37.386 "w_mbytes_per_sec": 0 00:14:37.386 }, 00:14:37.386 "claimed": true, 00:14:37.386 "claim_type": "exclusive_write", 00:14:37.386 "zoned": false, 00:14:37.386 "supported_io_types": { 00:14:37.386 "read": true, 00:14:37.386 "write": true, 00:14:37.386 "unmap": true, 00:14:37.386 "flush": true, 00:14:37.386 "reset": true, 00:14:37.386 "nvme_admin": false, 00:14:37.386 "nvme_io": false, 00:14:37.386 "nvme_io_md": false, 00:14:37.386 "write_zeroes": true, 00:14:37.386 "zcopy": true, 00:14:37.386 "get_zone_info": false, 00:14:37.386 "zone_management": false, 00:14:37.386 "zone_append": false, 00:14:37.386 "compare": false, 00:14:37.386 "compare_and_write": false, 00:14:37.386 "abort": true, 00:14:37.386 "seek_hole": false, 00:14:37.386 "seek_data": false, 00:14:37.386 "copy": true, 00:14:37.386 "nvme_iov_md": false 00:14:37.386 }, 00:14:37.386 "memory_domains": [ 00:14:37.386 { 00:14:37.386 "dma_device_id": "system", 00:14:37.386 "dma_device_type": 1 00:14:37.386 }, 00:14:37.386 { 00:14:37.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.386 "dma_device_type": 2 00:14:37.386 } 00:14:37.386 ], 00:14:37.386 "driver_specific": {} 00:14:37.386 } 00:14:37.386 ] 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.386 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.645 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.645 "name": "Existed_Raid", 00:14:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.645 "strip_size_kb": 64, 00:14:37.646 "state": "configuring", 00:14:37.646 "raid_level": "concat", 00:14:37.646 "superblock": false, 00:14:37.646 "num_base_bdevs": 4, 00:14:37.646 "num_base_bdevs_discovered": 1, 00:14:37.646 "num_base_bdevs_operational": 4, 00:14:37.646 "base_bdevs_list": [ 00:14:37.646 { 00:14:37.646 "name": "BaseBdev1", 00:14:37.646 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:37.646 "is_configured": true, 00:14:37.646 "data_offset": 0, 00:14:37.646 "data_size": 65536 00:14:37.646 }, 00:14:37.646 { 00:14:37.646 "name": "BaseBdev2", 00:14:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.646 "is_configured": false, 00:14:37.646 "data_offset": 0, 00:14:37.646 "data_size": 0 00:14:37.646 }, 00:14:37.646 { 00:14:37.646 "name": "BaseBdev3", 00:14:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.646 "is_configured": false, 00:14:37.646 "data_offset": 0, 00:14:37.646 "data_size": 0 00:14:37.646 }, 00:14:37.646 { 00:14:37.646 "name": "BaseBdev4", 00:14:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.646 "is_configured": false, 00:14:37.646 "data_offset": 0, 00:14:37.646 "data_size": 0 00:14:37.646 } 00:14:37.646 ] 00:14:37.646 }' 00:14:37.646 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.646 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.904 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:38.163 [2024-07-15 18:28:30.447686] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.163 [2024-07-15 18:28:30.447741] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32ad92a34500 name Existed_Raid, state configuring 00:14:38.163 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:14:38.421 [2024-07-15 18:28:30.727718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.421 [2024-07-15 18:28:30.728794] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.421 [2024-07-15 18:28:30.728859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.421 [2024-07-15 18:28:30.728865] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.421 [2024-07-15 18:28:30.728874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.421 [2024-07-15 18:28:30.728878] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:38.421 [2024-07-15 18:28:30.728886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.421 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.680 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.680 "name": "Existed_Raid", 00:14:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.680 "strip_size_kb": 64, 00:14:38.680 "state": "configuring", 00:14:38.680 "raid_level": "concat", 00:14:38.680 "superblock": false, 00:14:38.680 "num_base_bdevs": 4, 00:14:38.680 "num_base_bdevs_discovered": 1, 00:14:38.680 "num_base_bdevs_operational": 4, 00:14:38.680 "base_bdevs_list": [ 00:14:38.680 { 00:14:38.680 "name": "BaseBdev1", 00:14:38.680 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:38.680 "is_configured": true, 00:14:38.680 "data_offset": 0, 00:14:38.680 "data_size": 65536 00:14:38.680 }, 00:14:38.680 { 00:14:38.680 "name": "BaseBdev2", 00:14:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.680 "is_configured": false, 00:14:38.680 "data_offset": 0, 00:14:38.680 
"data_size": 0 00:14:38.680 }, 00:14:38.680 { 00:14:38.680 "name": "BaseBdev3", 00:14:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.680 "is_configured": false, 00:14:38.680 "data_offset": 0, 00:14:38.680 "data_size": 0 00:14:38.680 }, 00:14:38.680 { 00:14:38.680 "name": "BaseBdev4", 00:14:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.680 "is_configured": false, 00:14:38.680 "data_offset": 0, 00:14:38.680 "data_size": 0 00:14:38.680 } 00:14:38.680 ] 00:14:38.680 }' 00:14:38.680 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.680 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.938 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.196 [2024-07-15 18:28:31.539995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.196 BaseBdev2 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:39.196 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.455 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.715 [ 00:14:39.715 { 00:14:39.715 "name": "BaseBdev2", 00:14:39.715 "aliases": [ 00:14:39.715 "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5" 00:14:39.715 ], 00:14:39.715 "product_name": "Malloc disk", 00:14:39.715 "block_size": 512, 00:14:39.715 "num_blocks": 65536, 00:14:39.715 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:39.715 "assigned_rate_limits": { 00:14:39.715 "rw_ios_per_sec": 0, 00:14:39.715 "rw_mbytes_per_sec": 0, 00:14:39.715 "r_mbytes_per_sec": 0, 00:14:39.715 "w_mbytes_per_sec": 0 00:14:39.715 }, 00:14:39.715 "claimed": true, 00:14:39.715 "claim_type": "exclusive_write", 00:14:39.715 "zoned": false, 00:14:39.715 "supported_io_types": { 00:14:39.715 "read": true, 00:14:39.715 "write": true, 00:14:39.715 "unmap": true, 00:14:39.715 "flush": true, 00:14:39.715 "reset": true, 00:14:39.715 "nvme_admin": false, 00:14:39.715 "nvme_io": false, 00:14:39.715 "nvme_io_md": false, 00:14:39.715 "write_zeroes": true, 00:14:39.715 "zcopy": true, 00:14:39.715 "get_zone_info": false, 00:14:39.715 "zone_management": false, 00:14:39.715 "zone_append": false, 00:14:39.715 "compare": false, 00:14:39.715 "compare_and_write": false, 00:14:39.715 "abort": true, 00:14:39.715 "seek_hole": false, 00:14:39.715 "seek_data": false, 00:14:39.715 "copy": true, 00:14:39.715 "nvme_iov_md": false 00:14:39.715 }, 00:14:39.715 "memory_domains": [ 00:14:39.715 { 00:14:39.715 "dma_device_id": "system", 00:14:39.715 "dma_device_type": 
1 00:14:39.715 }, 00:14:39.715 { 00:14:39.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.715 "dma_device_type": 2 00:14:39.715 } 00:14:39.715 ], 00:14:39.715 "driver_specific": {} 00:14:39.715 } 00:14:39.715 ] 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.715 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.282 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.282 "name": "Existed_Raid", 00:14:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.282 "strip_size_kb": 64, 00:14:40.282 "state": "configuring", 00:14:40.282 "raid_level": "concat", 00:14:40.282 "superblock": false, 00:14:40.282 "num_base_bdevs": 4, 00:14:40.282 "num_base_bdevs_discovered": 2, 00:14:40.282 "num_base_bdevs_operational": 4, 00:14:40.282 "base_bdevs_list": [ 00:14:40.282 { 00:14:40.282 "name": "BaseBdev1", 00:14:40.282 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:40.282 "is_configured": true, 00:14:40.282 "data_offset": 0, 00:14:40.282 "data_size": 65536 00:14:40.282 }, 00:14:40.282 { 00:14:40.282 "name": "BaseBdev2", 00:14:40.282 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:40.282 "is_configured": true, 00:14:40.282 "data_offset": 0, 00:14:40.282 "data_size": 65536 00:14:40.282 }, 00:14:40.282 { 00:14:40.282 "name": "BaseBdev3", 00:14:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.282 "is_configured": false, 00:14:40.282 "data_offset": 0, 00:14:40.282 "data_size": 0 00:14:40.282 }, 00:14:40.282 { 00:14:40.282 "name": "BaseBdev4", 00:14:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.282 "is_configured": false, 00:14:40.282 "data_offset": 0, 00:14:40.282 "data_size": 0 00:14:40.282 } 00:14:40.282 ] 00:14:40.282 }' 00:14:40.282 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.282 
18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.539 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.797 [2024-07-15 18:28:33.004136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.797 BaseBdev3 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:40.797 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.053 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.310 [ 00:14:41.310 { 00:14:41.310 "name": "BaseBdev3", 00:14:41.310 "aliases": [ 00:14:41.310 "0a9a7828-42d8-11ef-9ade-d5fc5159efa5" 00:14:41.310 ], 00:14:41.310 "product_name": "Malloc disk", 00:14:41.310 "block_size": 512, 00:14:41.310 "num_blocks": 65536, 00:14:41.310 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:41.310 "assigned_rate_limits": { 00:14:41.310 "rw_ios_per_sec": 0, 00:14:41.310 "rw_mbytes_per_sec": 0, 00:14:41.310 "r_mbytes_per_sec": 0, 00:14:41.310 "w_mbytes_per_sec": 0 00:14:41.310 }, 00:14:41.310 "claimed": true, 00:14:41.310 "claim_type": "exclusive_write", 00:14:41.310 "zoned": false, 00:14:41.310 "supported_io_types": { 00:14:41.310 "read": true, 00:14:41.310 "write": true, 00:14:41.310 "unmap": true, 00:14:41.310 "flush": true, 00:14:41.310 "reset": true, 00:14:41.310 "nvme_admin": false, 00:14:41.310 "nvme_io": false, 00:14:41.310 "nvme_io_md": false, 00:14:41.310 "write_zeroes": true, 00:14:41.310 "zcopy": true, 00:14:41.310 "get_zone_info": false, 00:14:41.310 "zone_management": false, 00:14:41.310 "zone_append": false, 00:14:41.310 "compare": false, 00:14:41.310 "compare_and_write": false, 00:14:41.310 "abort": true, 00:14:41.310 "seek_hole": false, 00:14:41.310 "seek_data": false, 00:14:41.310 "copy": true, 00:14:41.310 "nvme_iov_md": false 00:14:41.310 }, 00:14:41.310 "memory_domains": [ 00:14:41.310 { 00:14:41.310 "dma_device_id": "system", 00:14:41.310 "dma_device_type": 1 00:14:41.310 }, 00:14:41.310 { 00:14:41.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.310 "dma_device_type": 2 00:14:41.310 } 00:14:41.310 ], 00:14:41.310 "driver_specific": {} 00:14:41.310 } 00:14:41.310 ] 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.310 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.568 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.568 "name": "Existed_Raid", 00:14:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.568 "strip_size_kb": 64, 00:14:41.568 "state": "configuring", 00:14:41.568 "raid_level": "concat", 00:14:41.568 "superblock": false, 00:14:41.568 "num_base_bdevs": 4, 00:14:41.568 "num_base_bdevs_discovered": 3, 00:14:41.568 "num_base_bdevs_operational": 4, 00:14:41.568 "base_bdevs_list": [ 00:14:41.568 { 00:14:41.568 "name": "BaseBdev1", 00:14:41.568 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:41.568 "is_configured": true, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 65536 00:14:41.568 }, 00:14:41.568 { 00:14:41.568 "name": "BaseBdev2", 00:14:41.568 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:41.568 "is_configured": true, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 65536 00:14:41.568 }, 00:14:41.568 { 00:14:41.568 "name": "BaseBdev3", 00:14:41.568 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:41.568 "is_configured": true, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 65536 00:14:41.568 }, 00:14:41.568 { 00:14:41.568 "name": "BaseBdev4", 00:14:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.568 "is_configured": false, 00:14:41.568 "data_offset": 0, 00:14:41.568 "data_size": 0 00:14:41.568 } 00:14:41.568 ] 00:14:41.568 }' 00:14:41.568 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.568 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.825 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:42.083 [2024-07-15 18:28:34.400139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.083 [2024-07-15 18:28:34.400172] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x32ad92a34a00 00:14:42.083 [2024-07-15 18:28:34.400177] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:42.083 [2024-07-15 18:28:34.400210] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32ad92a97e20 00:14:42.083 [2024-07-15 18:28:34.400312] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x32ad92a34a00 00:14:42.083 [2024-07-15 18:28:34.400316] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x32ad92a34a00 00:14:42.083 [2024-07-15 18:28:34.400350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.083 BaseBdev4 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:42.083 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.340 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:42.598 [ 00:14:42.598 { 00:14:42.598 "name": "BaseBdev4", 00:14:42.598 "aliases": [ 00:14:42.598 "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5" 00:14:42.598 ], 00:14:42.598 "product_name": "Malloc disk", 00:14:42.598 "block_size": 512, 00:14:42.598 "num_blocks": 65536, 00:14:42.598 "uuid": "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.598 "assigned_rate_limits": { 00:14:42.598 "rw_ios_per_sec": 0, 00:14:42.598 "rw_mbytes_per_sec": 0, 00:14:42.598 "r_mbytes_per_sec": 0, 00:14:42.598 "w_mbytes_per_sec": 0 00:14:42.598 }, 00:14:42.598 "claimed": true, 00:14:42.598 "claim_type": "exclusive_write", 00:14:42.598 "zoned": false, 00:14:42.598 "supported_io_types": { 00:14:42.598 "read": true, 00:14:42.598 "write": true, 00:14:42.598 "unmap": true, 00:14:42.598 "flush": true, 00:14:42.598 "reset": true, 00:14:42.598 "nvme_admin": false, 00:14:42.598 "nvme_io": false, 00:14:42.598 "nvme_io_md": false, 00:14:42.598 "write_zeroes": true, 00:14:42.598 "zcopy": true, 00:14:42.598 "get_zone_info": false, 00:14:42.598 "zone_management": false, 00:14:42.598 "zone_append": false, 00:14:42.598 "compare": false, 00:14:42.598 "compare_and_write": false, 00:14:42.598 "abort": true, 00:14:42.598 "seek_hole": false, 00:14:42.598 "seek_data": false, 00:14:42.598 "copy": true, 00:14:42.598 "nvme_iov_md": false 00:14:42.598 }, 00:14:42.598 "memory_domains": [ 00:14:42.598 { 00:14:42.598 "dma_device_id": "system", 00:14:42.598 "dma_device_type": 1 00:14:42.598 }, 00:14:42.598 { 00:14:42.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.598 "dma_device_type": 2 00:14:42.598 } 00:14:42.598 ], 00:14:42.598 "driver_specific": {} 00:14:42.598 } 00:14:42.598 ] 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
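BaseBdev4 is now claimed, so the volume has all four base disks, and the check that follows expects the state to have moved from configuring to online with num_base_bdevs_discovered at 4. A condensed sketch of that progression outside the harness — assuming Existed_Raid was created before any base bdev existed, as above, and reusing the rpc.py path, socket, and malloc sizes from this trace; the loop itself is ours for illustration:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Add the four 32 MiB / 512-byte-block malloc bdevs one at a time and watch
# num_base_bdevs_discovered climb; after the fourth the state flips to "online".
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
    $RPC bdev_wait_for_examine
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") |
        "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
done
# Expected progression: configuring 1/4, configuring 2/4, configuring 3/4, online 4/4.

Because the concat level carries no redundancy, removing any one of these base bdevs later drops the volume straight to offline rather than to a degraded-but-online state, which is what the has_redundancy check further down in this log relies on.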
00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.598 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.856 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.856 "name": "Existed_Raid", 00:14:42.856 "uuid": "0b6f8577-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.856 "strip_size_kb": 64, 00:14:42.856 "state": "online", 00:14:42.856 "raid_level": "concat", 00:14:42.856 "superblock": false, 00:14:42.856 "num_base_bdevs": 4, 00:14:42.856 "num_base_bdevs_discovered": 4, 00:14:42.856 "num_base_bdevs_operational": 4, 00:14:42.856 "base_bdevs_list": [ 00:14:42.856 { 00:14:42.856 "name": "BaseBdev1", 00:14:42.856 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.856 "is_configured": true, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 65536 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev2", 00:14:42.856 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.856 "is_configured": true, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 65536 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev3", 00:14:42.856 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.856 "is_configured": true, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 65536 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev4", 00:14:42.856 "uuid": "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5", 00:14:42.856 "is_configured": true, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 65536 00:14:42.856 } 00:14:42.856 ] 00:14:42.856 }' 00:14:42.856 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.856 18:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:43.422 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:43.422 [2024-07-15 18:28:35.808147] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.681 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:43.681 "name": "Existed_Raid", 00:14:43.681 "aliases": [ 00:14:43.681 "0b6f8577-42d8-11ef-9ade-d5fc5159efa5" 00:14:43.681 ], 00:14:43.681 "product_name": "Raid Volume", 00:14:43.681 "block_size": 512, 00:14:43.681 "num_blocks": 262144, 00:14:43.681 "uuid": "0b6f8577-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.681 "assigned_rate_limits": { 00:14:43.681 "rw_ios_per_sec": 0, 00:14:43.681 "rw_mbytes_per_sec": 0, 00:14:43.681 "r_mbytes_per_sec": 0, 00:14:43.681 "w_mbytes_per_sec": 0 00:14:43.681 }, 00:14:43.681 "claimed": false, 00:14:43.681 "zoned": false, 00:14:43.681 "supported_io_types": { 00:14:43.681 "read": true, 00:14:43.681 "write": true, 00:14:43.681 "unmap": true, 00:14:43.681 "flush": true, 00:14:43.681 "reset": true, 00:14:43.681 "nvme_admin": false, 00:14:43.681 "nvme_io": false, 00:14:43.681 "nvme_io_md": false, 00:14:43.681 "write_zeroes": true, 00:14:43.681 "zcopy": false, 00:14:43.681 "get_zone_info": false, 00:14:43.681 "zone_management": false, 00:14:43.681 "zone_append": false, 00:14:43.681 "compare": false, 00:14:43.681 "compare_and_write": false, 00:14:43.681 "abort": false, 00:14:43.681 "seek_hole": false, 00:14:43.681 "seek_data": false, 00:14:43.681 "copy": false, 00:14:43.681 "nvme_iov_md": false 00:14:43.681 }, 00:14:43.681 "memory_domains": [ 00:14:43.681 { 00:14:43.681 "dma_device_id": "system", 00:14:43.681 "dma_device_type": 1 00:14:43.681 }, 00:14:43.681 { 00:14:43.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.681 "dma_device_type": 2 00:14:43.681 }, 00:14:43.681 { 00:14:43.681 "dma_device_id": "system", 00:14:43.681 "dma_device_type": 1 00:14:43.681 }, 00:14:43.681 { 00:14:43.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.681 "dma_device_type": 2 00:14:43.681 }, 00:14:43.681 { 00:14:43.681 "dma_device_id": "system", 00:14:43.681 "dma_device_type": 1 00:14:43.681 }, 00:14:43.682 { 00:14:43.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.682 "dma_device_type": 2 00:14:43.682 }, 00:14:43.682 { 00:14:43.682 "dma_device_id": "system", 00:14:43.682 "dma_device_type": 1 00:14:43.682 }, 00:14:43.682 { 00:14:43.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.682 "dma_device_type": 2 00:14:43.682 } 00:14:43.682 ], 00:14:43.682 "driver_specific": { 00:14:43.682 "raid": { 00:14:43.682 "uuid": "0b6f8577-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.682 "strip_size_kb": 64, 00:14:43.682 "state": "online", 00:14:43.682 "raid_level": "concat", 00:14:43.682 "superblock": false, 00:14:43.682 "num_base_bdevs": 4, 00:14:43.682 "num_base_bdevs_discovered": 4, 00:14:43.682 "num_base_bdevs_operational": 4, 00:14:43.682 "base_bdevs_list": [ 00:14:43.682 { 00:14:43.682 "name": "BaseBdev1", 00:14:43.682 "uuid": 
"083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.682 "is_configured": true, 00:14:43.682 "data_offset": 0, 00:14:43.682 "data_size": 65536 00:14:43.682 }, 00:14:43.682 { 00:14:43.682 "name": "BaseBdev2", 00:14:43.682 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.682 "is_configured": true, 00:14:43.682 "data_offset": 0, 00:14:43.682 "data_size": 65536 00:14:43.682 }, 00:14:43.682 { 00:14:43.682 "name": "BaseBdev3", 00:14:43.682 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.682 "is_configured": true, 00:14:43.682 "data_offset": 0, 00:14:43.682 "data_size": 65536 00:14:43.682 }, 00:14:43.682 { 00:14:43.682 "name": "BaseBdev4", 00:14:43.682 "uuid": "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.682 "is_configured": true, 00:14:43.682 "data_offset": 0, 00:14:43.682 "data_size": 65536 00:14:43.682 } 00:14:43.682 ] 00:14:43.682 } 00:14:43.682 } 00:14:43.682 }' 00:14:43.682 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.682 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:43.682 BaseBdev2 00:14:43.682 BaseBdev3 00:14:43.682 BaseBdev4' 00:14:43.682 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.682 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:43.682 18:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.941 "name": "BaseBdev1", 00:14:43.941 "aliases": [ 00:14:43.941 "083cf1cc-42d8-11ef-9ade-d5fc5159efa5" 00:14:43.941 ], 00:14:43.941 "product_name": "Malloc disk", 00:14:43.941 "block_size": 512, 00:14:43.941 "num_blocks": 65536, 00:14:43.941 "uuid": "083cf1cc-42d8-11ef-9ade-d5fc5159efa5", 00:14:43.941 "assigned_rate_limits": { 00:14:43.941 "rw_ios_per_sec": 0, 00:14:43.941 "rw_mbytes_per_sec": 0, 00:14:43.941 "r_mbytes_per_sec": 0, 00:14:43.941 "w_mbytes_per_sec": 0 00:14:43.941 }, 00:14:43.941 "claimed": true, 00:14:43.941 "claim_type": "exclusive_write", 00:14:43.941 "zoned": false, 00:14:43.941 "supported_io_types": { 00:14:43.941 "read": true, 00:14:43.941 "write": true, 00:14:43.941 "unmap": true, 00:14:43.941 "flush": true, 00:14:43.941 "reset": true, 00:14:43.941 "nvme_admin": false, 00:14:43.941 "nvme_io": false, 00:14:43.941 "nvme_io_md": false, 00:14:43.941 "write_zeroes": true, 00:14:43.941 "zcopy": true, 00:14:43.941 "get_zone_info": false, 00:14:43.941 "zone_management": false, 00:14:43.941 "zone_append": false, 00:14:43.941 "compare": false, 00:14:43.941 "compare_and_write": false, 00:14:43.941 "abort": true, 00:14:43.941 "seek_hole": false, 00:14:43.941 "seek_data": false, 00:14:43.941 "copy": true, 00:14:43.941 "nvme_iov_md": false 00:14:43.941 }, 00:14:43.941 "memory_domains": [ 00:14:43.941 { 00:14:43.941 "dma_device_id": "system", 00:14:43.941 "dma_device_type": 1 00:14:43.941 }, 00:14:43.941 { 00:14:43.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.941 "dma_device_type": 2 00:14:43.941 } 00:14:43.941 ], 00:14:43.941 "driver_specific": {} 00:14:43.941 }' 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:43.941 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.199 "name": "BaseBdev2", 00:14:44.199 "aliases": [ 00:14:44.199 "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5" 00:14:44.199 ], 00:14:44.199 "product_name": "Malloc disk", 00:14:44.199 "block_size": 512, 00:14:44.199 "num_blocks": 65536, 00:14:44.199 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:44.199 "assigned_rate_limits": { 00:14:44.199 "rw_ios_per_sec": 0, 00:14:44.199 "rw_mbytes_per_sec": 0, 00:14:44.199 "r_mbytes_per_sec": 0, 00:14:44.199 "w_mbytes_per_sec": 0 00:14:44.199 }, 00:14:44.199 "claimed": true, 00:14:44.199 "claim_type": "exclusive_write", 00:14:44.199 "zoned": false, 00:14:44.199 "supported_io_types": { 00:14:44.199 "read": true, 00:14:44.199 "write": true, 00:14:44.199 "unmap": true, 00:14:44.199 "flush": true, 00:14:44.199 "reset": true, 00:14:44.199 "nvme_admin": false, 00:14:44.199 "nvme_io": false, 00:14:44.199 "nvme_io_md": false, 00:14:44.199 "write_zeroes": true, 00:14:44.199 "zcopy": true, 00:14:44.199 "get_zone_info": false, 00:14:44.199 "zone_management": false, 00:14:44.199 "zone_append": false, 00:14:44.199 "compare": false, 00:14:44.199 "compare_and_write": false, 00:14:44.199 "abort": true, 00:14:44.199 "seek_hole": false, 00:14:44.199 "seek_data": false, 00:14:44.199 "copy": true, 00:14:44.199 "nvme_iov_md": false 00:14:44.199 }, 00:14:44.199 "memory_domains": [ 00:14:44.199 { 00:14:44.199 "dma_device_id": "system", 00:14:44.199 "dma_device_type": 1 00:14:44.199 }, 00:14:44.199 { 00:14:44.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.199 "dma_device_type": 2 00:14:44.199 } 00:14:44.199 ], 00:14:44.199 "driver_specific": {} 00:14:44.199 }' 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.199 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.200 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.200 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:44.200 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.766 "name": "BaseBdev3", 00:14:44.766 "aliases": [ 00:14:44.766 "0a9a7828-42d8-11ef-9ade-d5fc5159efa5" 00:14:44.766 ], 00:14:44.766 "product_name": "Malloc disk", 00:14:44.766 "block_size": 512, 00:14:44.766 "num_blocks": 65536, 00:14:44.766 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:44.766 "assigned_rate_limits": { 00:14:44.766 "rw_ios_per_sec": 0, 00:14:44.766 "rw_mbytes_per_sec": 0, 00:14:44.766 "r_mbytes_per_sec": 0, 00:14:44.766 "w_mbytes_per_sec": 0 00:14:44.766 }, 00:14:44.766 "claimed": true, 00:14:44.766 "claim_type": "exclusive_write", 00:14:44.766 "zoned": false, 00:14:44.766 "supported_io_types": { 00:14:44.766 "read": true, 00:14:44.766 "write": true, 00:14:44.766 "unmap": true, 00:14:44.766 "flush": true, 00:14:44.766 "reset": true, 00:14:44.766 "nvme_admin": false, 00:14:44.766 "nvme_io": false, 00:14:44.766 "nvme_io_md": false, 00:14:44.766 "write_zeroes": true, 00:14:44.766 "zcopy": true, 00:14:44.766 "get_zone_info": false, 00:14:44.766 "zone_management": false, 00:14:44.766 "zone_append": false, 00:14:44.766 "compare": false, 00:14:44.766 "compare_and_write": false, 00:14:44.766 "abort": true, 00:14:44.766 "seek_hole": false, 00:14:44.766 "seek_data": false, 00:14:44.766 "copy": true, 00:14:44.766 "nvme_iov_md": false 00:14:44.766 }, 00:14:44.766 "memory_domains": [ 00:14:44.766 { 00:14:44.766 "dma_device_id": "system", 00:14:44.766 "dma_device_type": 1 00:14:44.766 }, 00:14:44.766 { 00:14:44.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.766 "dma_device_type": 2 00:14:44.766 } 00:14:44.766 ], 00:14:44.766 "driver_specific": {} 00:14:44.766 }' 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:44.766 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.025 "name": "BaseBdev4", 00:14:45.025 "aliases": [ 00:14:45.025 "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5" 00:14:45.025 ], 00:14:45.025 "product_name": "Malloc disk", 00:14:45.025 "block_size": 512, 00:14:45.025 "num_blocks": 65536, 00:14:45.025 "uuid": "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5", 00:14:45.025 "assigned_rate_limits": { 00:14:45.025 "rw_ios_per_sec": 0, 00:14:45.025 "rw_mbytes_per_sec": 0, 00:14:45.025 "r_mbytes_per_sec": 0, 00:14:45.025 "w_mbytes_per_sec": 0 00:14:45.025 }, 00:14:45.025 "claimed": true, 00:14:45.025 "claim_type": "exclusive_write", 00:14:45.025 "zoned": false, 00:14:45.025 "supported_io_types": { 00:14:45.025 "read": true, 00:14:45.025 "write": true, 00:14:45.025 "unmap": true, 00:14:45.025 "flush": true, 00:14:45.025 "reset": true, 00:14:45.025 "nvme_admin": false, 00:14:45.025 "nvme_io": false, 00:14:45.025 "nvme_io_md": false, 00:14:45.025 "write_zeroes": true, 00:14:45.025 "zcopy": true, 00:14:45.025 "get_zone_info": false, 00:14:45.025 "zone_management": false, 00:14:45.025 "zone_append": false, 00:14:45.025 "compare": false, 00:14:45.025 "compare_and_write": false, 00:14:45.025 "abort": true, 00:14:45.025 "seek_hole": false, 00:14:45.025 "seek_data": false, 00:14:45.025 "copy": true, 00:14:45.025 "nvme_iov_md": false 00:14:45.025 }, 00:14:45.025 "memory_domains": [ 00:14:45.025 { 00:14:45.025 "dma_device_id": "system", 00:14:45.025 "dma_device_type": 1 00:14:45.025 }, 00:14:45.025 { 00:14:45.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.025 "dma_device_type": 2 00:14:45.025 } 00:14:45.025 ], 00:14:45.025 "driver_specific": {} 00:14:45.025 }' 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.025 18:28:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.025 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:45.284 [2024-07-15 18:28:37.528296] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.284 [2024-07-15 18:28:37.528326] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.284 [2024-07-15 18:28:37.528341] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.284 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.543 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.543 "name": "Existed_Raid", 00:14:45.543 "uuid": "0b6f8577-42d8-11ef-9ade-d5fc5159efa5", 00:14:45.543 "strip_size_kb": 64, 00:14:45.543 "state": "offline", 00:14:45.543 "raid_level": "concat", 00:14:45.543 "superblock": false, 00:14:45.543 "num_base_bdevs": 4, 00:14:45.543 "num_base_bdevs_discovered": 3, 00:14:45.543 "num_base_bdevs_operational": 3, 00:14:45.543 "base_bdevs_list": [ 00:14:45.543 { 00:14:45.543 
"name": null, 00:14:45.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.543 "is_configured": false, 00:14:45.543 "data_offset": 0, 00:14:45.543 "data_size": 65536 00:14:45.543 }, 00:14:45.543 { 00:14:45.543 "name": "BaseBdev2", 00:14:45.543 "uuid": "09bb0f4f-42d8-11ef-9ade-d5fc5159efa5", 00:14:45.543 "is_configured": true, 00:14:45.543 "data_offset": 0, 00:14:45.543 "data_size": 65536 00:14:45.543 }, 00:14:45.543 { 00:14:45.543 "name": "BaseBdev3", 00:14:45.543 "uuid": "0a9a7828-42d8-11ef-9ade-d5fc5159efa5", 00:14:45.543 "is_configured": true, 00:14:45.543 "data_offset": 0, 00:14:45.543 "data_size": 65536 00:14:45.543 }, 00:14:45.543 { 00:14:45.543 "name": "BaseBdev4", 00:14:45.543 "uuid": "0b6f7d98-42d8-11ef-9ade-d5fc5159efa5", 00:14:45.543 "is_configured": true, 00:14:45.543 "data_offset": 0, 00:14:45.543 "data_size": 65536 00:14:45.543 } 00:14:45.543 ] 00:14:45.543 }' 00:14:45.543 18:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.543 18:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.801 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:45.801 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:45.801 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.801 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:46.369 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:46.369 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.369 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:46.369 [2024-07-15 18:28:38.746303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.627 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.627 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.627 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.627 18:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:46.885 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:46.885 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.885 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:47.154 [2024-07-15 18:28:39.326779] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.154 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:47.154 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:47.154 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:47.154 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:47.411 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:47.411 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.411 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:47.670 [2024-07-15 18:28:39.963464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:47.670 [2024-07-15 18:28:39.963499] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32ad92a34a00 name Existed_Raid, state offline 00:14:47.670 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:47.670 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:47.670 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.671 18:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.236 BaseBdev2 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.236 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.494 18:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.781 [ 00:14:48.781 { 00:14:48.781 "name": "BaseBdev2", 00:14:48.781 "aliases": [ 00:14:48.781 "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5" 00:14:48.781 ], 00:14:48.781 "product_name": "Malloc disk", 00:14:48.781 "block_size": 512, 00:14:48.781 "num_blocks": 65536, 00:14:48.781 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:48.781 "assigned_rate_limits": { 00:14:48.781 "rw_ios_per_sec": 0, 00:14:48.781 "rw_mbytes_per_sec": 0, 00:14:48.781 
"r_mbytes_per_sec": 0, 00:14:48.781 "w_mbytes_per_sec": 0 00:14:48.781 }, 00:14:48.781 "claimed": false, 00:14:48.781 "zoned": false, 00:14:48.781 "supported_io_types": { 00:14:48.781 "read": true, 00:14:48.781 "write": true, 00:14:48.781 "unmap": true, 00:14:48.781 "flush": true, 00:14:48.781 "reset": true, 00:14:48.781 "nvme_admin": false, 00:14:48.781 "nvme_io": false, 00:14:48.781 "nvme_io_md": false, 00:14:48.781 "write_zeroes": true, 00:14:48.781 "zcopy": true, 00:14:48.781 "get_zone_info": false, 00:14:48.781 "zone_management": false, 00:14:48.781 "zone_append": false, 00:14:48.781 "compare": false, 00:14:48.781 "compare_and_write": false, 00:14:48.781 "abort": true, 00:14:48.781 "seek_hole": false, 00:14:48.782 "seek_data": false, 00:14:48.782 "copy": true, 00:14:48.782 "nvme_iov_md": false 00:14:48.782 }, 00:14:48.782 "memory_domains": [ 00:14:48.782 { 00:14:48.782 "dma_device_id": "system", 00:14:48.782 "dma_device_type": 1 00:14:48.782 }, 00:14:48.782 { 00:14:48.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.782 "dma_device_type": 2 00:14:48.782 } 00:14:48.782 ], 00:14:48.782 "driver_specific": {} 00:14:48.782 } 00:14:48.782 ] 00:14:48.782 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:48.782 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:48.782 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:48.782 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.040 BaseBdev3 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.040 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.298 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.557 [ 00:14:49.557 { 00:14:49.557 "name": "BaseBdev3", 00:14:49.557 "aliases": [ 00:14:49.557 "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5" 00:14:49.557 ], 00:14:49.557 "product_name": "Malloc disk", 00:14:49.557 "block_size": 512, 00:14:49.557 "num_blocks": 65536, 00:14:49.557 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:49.557 "assigned_rate_limits": { 00:14:49.557 "rw_ios_per_sec": 0, 00:14:49.557 "rw_mbytes_per_sec": 0, 00:14:49.557 "r_mbytes_per_sec": 0, 00:14:49.557 "w_mbytes_per_sec": 0 00:14:49.557 }, 00:14:49.557 "claimed": false, 00:14:49.557 "zoned": false, 00:14:49.557 "supported_io_types": { 00:14:49.557 "read": true, 00:14:49.557 "write": true, 00:14:49.557 "unmap": true, 00:14:49.557 "flush": true, 00:14:49.557 "reset": true, 00:14:49.557 "nvme_admin": false, 
00:14:49.557 "nvme_io": false, 00:14:49.557 "nvme_io_md": false, 00:14:49.557 "write_zeroes": true, 00:14:49.557 "zcopy": true, 00:14:49.557 "get_zone_info": false, 00:14:49.557 "zone_management": false, 00:14:49.557 "zone_append": false, 00:14:49.557 "compare": false, 00:14:49.557 "compare_and_write": false, 00:14:49.557 "abort": true, 00:14:49.557 "seek_hole": false, 00:14:49.557 "seek_data": false, 00:14:49.557 "copy": true, 00:14:49.557 "nvme_iov_md": false 00:14:49.557 }, 00:14:49.557 "memory_domains": [ 00:14:49.557 { 00:14:49.557 "dma_device_id": "system", 00:14:49.557 "dma_device_type": 1 00:14:49.557 }, 00:14:49.557 { 00:14:49.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.557 "dma_device_type": 2 00:14:49.557 } 00:14:49.557 ], 00:14:49.557 "driver_specific": {} 00:14:49.557 } 00:14:49.557 ] 00:14:49.557 18:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:49.557 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:49.557 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:49.557 18:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.815 BaseBdev4 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.815 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.074 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:50.333 [ 00:14:50.333 { 00:14:50.333 "name": "BaseBdev4", 00:14:50.333 "aliases": [ 00:14:50.333 "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5" 00:14:50.333 ], 00:14:50.333 "product_name": "Malloc disk", 00:14:50.333 "block_size": 512, 00:14:50.333 "num_blocks": 65536, 00:14:50.333 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:50.333 "assigned_rate_limits": { 00:14:50.333 "rw_ios_per_sec": 0, 00:14:50.333 "rw_mbytes_per_sec": 0, 00:14:50.333 "r_mbytes_per_sec": 0, 00:14:50.333 "w_mbytes_per_sec": 0 00:14:50.333 }, 00:14:50.333 "claimed": false, 00:14:50.333 "zoned": false, 00:14:50.333 "supported_io_types": { 00:14:50.333 "read": true, 00:14:50.333 "write": true, 00:14:50.333 "unmap": true, 00:14:50.333 "flush": true, 00:14:50.333 "reset": true, 00:14:50.333 "nvme_admin": false, 00:14:50.333 "nvme_io": false, 00:14:50.333 "nvme_io_md": false, 00:14:50.333 "write_zeroes": true, 00:14:50.333 "zcopy": true, 00:14:50.333 "get_zone_info": false, 00:14:50.333 "zone_management": false, 00:14:50.333 "zone_append": false, 00:14:50.333 "compare": false, 00:14:50.333 "compare_and_write": false, 00:14:50.333 "abort": true, 
00:14:50.333 "seek_hole": false, 00:14:50.333 "seek_data": false, 00:14:50.333 "copy": true, 00:14:50.333 "nvme_iov_md": false 00:14:50.333 }, 00:14:50.333 "memory_domains": [ 00:14:50.333 { 00:14:50.333 "dma_device_id": "system", 00:14:50.333 "dma_device_type": 1 00:14:50.333 }, 00:14:50.333 { 00:14:50.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.333 "dma_device_type": 2 00:14:50.333 } 00:14:50.333 ], 00:14:50.333 "driver_specific": {} 00:14:50.333 } 00:14:50.333 ] 00:14:50.333 18:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:50.333 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:50.333 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:50.333 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:50.592 [2024-07-15 18:28:42.841615] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.592 [2024-07-15 18:28:42.841693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.592 [2024-07-15 18:28:42.841717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.592 [2024-07-15 18:28:42.842328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.592 [2024-07-15 18:28:42.842346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.592 18:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.851 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.851 "name": "Existed_Raid", 00:14:50.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.851 "strip_size_kb": 64, 00:14:50.851 "state": "configuring", 00:14:50.851 "raid_level": "concat", 00:14:50.851 "superblock": false, 00:14:50.851 "num_base_bdevs": 4, 00:14:50.851 
"num_base_bdevs_discovered": 3, 00:14:50.851 "num_base_bdevs_operational": 4, 00:14:50.851 "base_bdevs_list": [ 00:14:50.851 { 00:14:50.851 "name": "BaseBdev1", 00:14:50.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.851 "is_configured": false, 00:14:50.851 "data_offset": 0, 00:14:50.851 "data_size": 0 00:14:50.851 }, 00:14:50.851 { 00:14:50.851 "name": "BaseBdev2", 00:14:50.851 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:50.851 "is_configured": true, 00:14:50.851 "data_offset": 0, 00:14:50.851 "data_size": 65536 00:14:50.851 }, 00:14:50.851 { 00:14:50.851 "name": "BaseBdev3", 00:14:50.851 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:50.851 "is_configured": true, 00:14:50.851 "data_offset": 0, 00:14:50.851 "data_size": 65536 00:14:50.851 }, 00:14:50.851 { 00:14:50.851 "name": "BaseBdev4", 00:14:50.851 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:50.851 "is_configured": true, 00:14:50.851 "data_offset": 0, 00:14:50.851 "data_size": 65536 00:14:50.851 } 00:14:50.851 ] 00:14:50.851 }' 00:14:50.851 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.851 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.114 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:51.371 [2024-07-15 18:28:43.705703] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.371 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.372 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.372 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.651 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.651 "name": "Existed_Raid", 00:14:51.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.651 "strip_size_kb": 64, 00:14:51.651 "state": "configuring", 00:14:51.651 "raid_level": "concat", 00:14:51.651 "superblock": false, 00:14:51.651 "num_base_bdevs": 4, 00:14:51.651 "num_base_bdevs_discovered": 2, 00:14:51.651 "num_base_bdevs_operational": 4, 00:14:51.651 "base_bdevs_list": [ 00:14:51.651 { 00:14:51.651 
"name": "BaseBdev1", 00:14:51.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.651 "is_configured": false, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 0 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "name": null, 00:14:51.651 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:51.651 "is_configured": false, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "name": "BaseBdev3", 00:14:51.651 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:51.651 "is_configured": true, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "name": "BaseBdev4", 00:14:51.651 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:51.651 "is_configured": true, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 } 00:14:51.651 ] 00:14:51.651 }' 00:14:51.651 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.651 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.229 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.229 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.488 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:52.488 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.747 [2024-07-15 18:28:44.941959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.747 BaseBdev1 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.747 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.005 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.264 [ 00:14:53.264 { 00:14:53.264 "name": "BaseBdev1", 00:14:53.264 "aliases": [ 00:14:53.264 "11b80c3f-42d8-11ef-9ade-d5fc5159efa5" 00:14:53.264 ], 00:14:53.264 "product_name": "Malloc disk", 00:14:53.264 "block_size": 512, 00:14:53.264 "num_blocks": 65536, 00:14:53.264 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:53.264 "assigned_rate_limits": { 00:14:53.264 "rw_ios_per_sec": 0, 00:14:53.264 "rw_mbytes_per_sec": 0, 00:14:53.264 "r_mbytes_per_sec": 0, 00:14:53.264 "w_mbytes_per_sec": 0 00:14:53.264 }, 00:14:53.264 "claimed": true, 00:14:53.264 "claim_type": "exclusive_write", 00:14:53.264 "zoned": false, 
00:14:53.264 "supported_io_types": { 00:14:53.264 "read": true, 00:14:53.264 "write": true, 00:14:53.264 "unmap": true, 00:14:53.264 "flush": true, 00:14:53.264 "reset": true, 00:14:53.264 "nvme_admin": false, 00:14:53.264 "nvme_io": false, 00:14:53.264 "nvme_io_md": false, 00:14:53.264 "write_zeroes": true, 00:14:53.264 "zcopy": true, 00:14:53.264 "get_zone_info": false, 00:14:53.264 "zone_management": false, 00:14:53.264 "zone_append": false, 00:14:53.264 "compare": false, 00:14:53.264 "compare_and_write": false, 00:14:53.264 "abort": true, 00:14:53.264 "seek_hole": false, 00:14:53.264 "seek_data": false, 00:14:53.264 "copy": true, 00:14:53.264 "nvme_iov_md": false 00:14:53.264 }, 00:14:53.264 "memory_domains": [ 00:14:53.264 { 00:14:53.264 "dma_device_id": "system", 00:14:53.264 "dma_device_type": 1 00:14:53.264 }, 00:14:53.264 { 00:14:53.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.264 "dma_device_type": 2 00:14:53.264 } 00:14:53.264 ], 00:14:53.264 "driver_specific": {} 00:14:53.264 } 00:14:53.264 ] 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.264 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.523 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.523 "name": "Existed_Raid", 00:14:53.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.523 "strip_size_kb": 64, 00:14:53.523 "state": "configuring", 00:14:53.523 "raid_level": "concat", 00:14:53.523 "superblock": false, 00:14:53.523 "num_base_bdevs": 4, 00:14:53.523 "num_base_bdevs_discovered": 3, 00:14:53.523 "num_base_bdevs_operational": 4, 00:14:53.523 "base_bdevs_list": [ 00:14:53.523 { 00:14:53.523 "name": "BaseBdev1", 00:14:53.523 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": null, 00:14:53.523 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:53.523 "is_configured": false, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 
}, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev3", 00:14:53.523 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev4", 00:14:53.523 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 } 00:14:53.523 ] 00:14:53.523 }' 00:14:53.523 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.523 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.781 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.781 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.039 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:54.039 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:54.297 [2024-07-15 18:28:46.613953] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.297 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.556 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.556 "name": "Existed_Raid", 00:14:54.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.556 "strip_size_kb": 64, 00:14:54.556 "state": "configuring", 00:14:54.556 "raid_level": "concat", 00:14:54.556 "superblock": false, 00:14:54.556 "num_base_bdevs": 4, 00:14:54.556 "num_base_bdevs_discovered": 2, 00:14:54.556 "num_base_bdevs_operational": 4, 00:14:54.556 "base_bdevs_list": [ 00:14:54.556 { 00:14:54.556 "name": "BaseBdev1", 00:14:54.556 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:54.556 "is_configured": true, 00:14:54.556 
"data_offset": 0, 00:14:54.556 "data_size": 65536 00:14:54.556 }, 00:14:54.556 { 00:14:54.556 "name": null, 00:14:54.556 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:54.556 "is_configured": false, 00:14:54.556 "data_offset": 0, 00:14:54.556 "data_size": 65536 00:14:54.556 }, 00:14:54.556 { 00:14:54.556 "name": null, 00:14:54.556 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:54.556 "is_configured": false, 00:14:54.556 "data_offset": 0, 00:14:54.556 "data_size": 65536 00:14:54.556 }, 00:14:54.556 { 00:14:54.556 "name": "BaseBdev4", 00:14:54.556 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:54.556 "is_configured": true, 00:14:54.556 "data_offset": 0, 00:14:54.556 "data_size": 65536 00:14:54.556 } 00:14:54.556 ] 00:14:54.556 }' 00:14:54.556 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.556 18:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.122 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.122 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.122 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:55.122 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:55.381 [2024-07-15 18:28:47.726052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.381 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.639 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.639 "name": "Existed_Raid", 00:14:55.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.639 "strip_size_kb": 64, 00:14:55.639 "state": "configuring", 00:14:55.639 "raid_level": "concat", 00:14:55.639 "superblock": false, 00:14:55.639 
"num_base_bdevs": 4, 00:14:55.639 "num_base_bdevs_discovered": 3, 00:14:55.639 "num_base_bdevs_operational": 4, 00:14:55.639 "base_bdevs_list": [ 00:14:55.639 { 00:14:55.639 "name": "BaseBdev1", 00:14:55.639 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:55.639 "is_configured": true, 00:14:55.639 "data_offset": 0, 00:14:55.639 "data_size": 65536 00:14:55.639 }, 00:14:55.639 { 00:14:55.639 "name": null, 00:14:55.639 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:55.640 "is_configured": false, 00:14:55.640 "data_offset": 0, 00:14:55.640 "data_size": 65536 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "name": "BaseBdev3", 00:14:55.640 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 0, 00:14:55.640 "data_size": 65536 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "name": "BaseBdev4", 00:14:55.640 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 0, 00:14:55.640 "data_size": 65536 00:14:55.640 } 00:14:55.640 ] 00:14:55.640 }' 00:14:55.640 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.640 18:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.205 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.205 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:56.205 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:56.463 [2024-07-15 18:28:48.826151] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.463 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.030 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:57.030 "name": "Existed_Raid", 00:14:57.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.030 "strip_size_kb": 64, 00:14:57.030 "state": "configuring", 00:14:57.030 "raid_level": "concat", 00:14:57.030 "superblock": false, 00:14:57.030 "num_base_bdevs": 4, 00:14:57.030 "num_base_bdevs_discovered": 2, 00:14:57.030 "num_base_bdevs_operational": 4, 00:14:57.030 "base_bdevs_list": [ 00:14:57.030 { 00:14:57.030 "name": null, 00:14:57.030 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:57.030 "is_configured": false, 00:14:57.030 "data_offset": 0, 00:14:57.030 "data_size": 65536 00:14:57.030 }, 00:14:57.030 { 00:14:57.030 "name": null, 00:14:57.030 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:57.030 "is_configured": false, 00:14:57.030 "data_offset": 0, 00:14:57.030 "data_size": 65536 00:14:57.030 }, 00:14:57.030 { 00:14:57.030 "name": "BaseBdev3", 00:14:57.030 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:57.030 "is_configured": true, 00:14:57.030 "data_offset": 0, 00:14:57.030 "data_size": 65536 00:14:57.030 }, 00:14:57.030 { 00:14:57.030 "name": "BaseBdev4", 00:14:57.030 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:57.030 "is_configured": true, 00:14:57.030 "data_offset": 0, 00:14:57.030 "data_size": 65536 00:14:57.030 } 00:14:57.030 ] 00:14:57.030 }' 00:14:57.030 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.030 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.289 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.289 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.547 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:57.547 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.819 [2024-07-15 18:28:50.052281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
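
Deleting the malloc bdev behind a claimed member (bdev_malloc_delete BaseBdev1 above) hot-removes it from the array: the slot's name goes null and is_configured false, but the uuid 11b80c3f-42d8-11ef-9ade-d5fc5159efa5 stays recorded so the member can later be replaced. A hedged read-back of that retained uuid:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid'   # uuid of the departed member
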
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.819 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.083 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.083 "name": "Existed_Raid", 00:14:58.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.083 "strip_size_kb": 64, 00:14:58.083 "state": "configuring", 00:14:58.083 "raid_level": "concat", 00:14:58.083 "superblock": false, 00:14:58.083 "num_base_bdevs": 4, 00:14:58.083 "num_base_bdevs_discovered": 3, 00:14:58.083 "num_base_bdevs_operational": 4, 00:14:58.083 "base_bdevs_list": [ 00:14:58.083 { 00:14:58.083 "name": null, 00:14:58.083 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:58.083 "is_configured": false, 00:14:58.083 "data_offset": 0, 00:14:58.083 "data_size": 65536 00:14:58.083 }, 00:14:58.083 { 00:14:58.083 "name": "BaseBdev2", 00:14:58.083 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:58.083 "is_configured": true, 00:14:58.083 "data_offset": 0, 00:14:58.083 "data_size": 65536 00:14:58.083 }, 00:14:58.083 { 00:14:58.083 "name": "BaseBdev3", 00:14:58.083 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:58.083 "is_configured": true, 00:14:58.083 "data_offset": 0, 00:14:58.083 "data_size": 65536 00:14:58.083 }, 00:14:58.083 { 00:14:58.083 "name": "BaseBdev4", 00:14:58.083 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:58.083 "is_configured": true, 00:14:58.083 "data_offset": 0, 00:14:58.083 "data_size": 65536 00:14:58.083 } 00:14:58.083 ] 00:14:58.083 }' 00:14:58.083 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.083 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.649 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.649 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.649 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:58.649 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.649 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.908 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 11b80c3f-42d8-11ef-9ade-d5fc5159efa5 00:14:59.167 [2024-07-15 18:28:51.500584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:59.167 [2024-07-15 18:28:51.500623] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x32ad92a34f00 00:14:59.167 [2024-07-15 18:28:51.500627] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:59.167 [2024-07-15 18:28:51.500654] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32ad92a97e20 00:14:59.167 [2024-07-15 18:28:51.500780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x32ad92a34f00 00:14:59.167 [2024-07-15 18:28:51.500787] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x32ad92a34f00 00:14:59.167 [2024-07-15 18:28:51.500836] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.167 NewBaseBdev 00:14:59.167 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:59.167 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:59.167 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:59.167 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:59.167 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:59.168 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:59.168 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.426 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:59.683 [ 00:14:59.684 { 00:14:59.684 "name": "NewBaseBdev", 00:14:59.684 "aliases": [ 00:14:59.684 "11b80c3f-42d8-11ef-9ade-d5fc5159efa5" 00:14:59.684 ], 00:14:59.684 "product_name": "Malloc disk", 00:14:59.684 "block_size": 512, 00:14:59.684 "num_blocks": 65536, 00:14:59.684 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.684 "assigned_rate_limits": { 00:14:59.684 "rw_ios_per_sec": 0, 00:14:59.684 "rw_mbytes_per_sec": 0, 00:14:59.684 "r_mbytes_per_sec": 0, 00:14:59.684 "w_mbytes_per_sec": 0 00:14:59.684 }, 00:14:59.684 "claimed": true, 00:14:59.684 "claim_type": "exclusive_write", 00:14:59.684 "zoned": false, 00:14:59.684 "supported_io_types": { 00:14:59.684 "read": true, 00:14:59.684 "write": true, 00:14:59.684 "unmap": true, 00:14:59.684 "flush": true, 00:14:59.684 "reset": true, 00:14:59.684 "nvme_admin": false, 00:14:59.684 "nvme_io": false, 00:14:59.684 "nvme_io_md": false, 00:14:59.684 "write_zeroes": true, 00:14:59.684 "zcopy": true, 00:14:59.684 "get_zone_info": false, 00:14:59.684 "zone_management": false, 00:14:59.684 "zone_append": false, 00:14:59.684 "compare": false, 00:14:59.684 "compare_and_write": false, 00:14:59.684 "abort": true, 00:14:59.684 "seek_hole": false, 00:14:59.684 "seek_data": false, 00:14:59.684 "copy": true, 00:14:59.684 "nvme_iov_md": false 00:14:59.684 }, 00:14:59.684 "memory_domains": [ 00:14:59.684 { 00:14:59.684 "dma_device_id": "system", 00:14:59.684 "dma_device_type": 1 00:14:59.684 }, 00:14:59.684 { 00:14:59.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.684 "dma_device_type": 2 00:14:59.684 } 00:14:59.684 ], 00:14:59.684 "driver_specific": {} 00:14:59.684 } 00:14:59.684 ] 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.684 18:28:52 
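
Because the vacated slot kept the original member uuid, creating any bdev with that uuid completes the set: the -u flag above attaches NewBaseBdev into BaseBdev1's old slot, configuration resumes, and the array goes online (picking up a real Existed_Raid uuid in place of the all-zero placeholder). Sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u 11b80c3f-42d8-11ef-9ade-d5fc5159efa5
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> online
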
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.684 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.942 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.942 "name": "Existed_Raid", 00:14:59.942 "uuid": "15a0d790-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.942 "strip_size_kb": 64, 00:14:59.942 "state": "online", 00:14:59.942 "raid_level": "concat", 00:14:59.942 "superblock": false, 00:14:59.942 "num_base_bdevs": 4, 00:14:59.942 "num_base_bdevs_discovered": 4, 00:14:59.942 "num_base_bdevs_operational": 4, 00:14:59.942 "base_bdevs_list": [ 00:14:59.942 { 00:14:59.942 "name": "NewBaseBdev", 00:14:59.942 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.942 "is_configured": true, 00:14:59.942 "data_offset": 0, 00:14:59.942 "data_size": 65536 00:14:59.942 }, 00:14:59.942 { 00:14:59.942 "name": "BaseBdev2", 00:14:59.942 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.942 "is_configured": true, 00:14:59.942 "data_offset": 0, 00:14:59.942 "data_size": 65536 00:14:59.942 }, 00:14:59.942 { 00:14:59.942 "name": "BaseBdev3", 00:14:59.942 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.942 "is_configured": true, 00:14:59.942 "data_offset": 0, 00:14:59.942 "data_size": 65536 00:14:59.942 }, 00:14:59.942 { 00:14:59.942 "name": "BaseBdev4", 00:14:59.942 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:14:59.942 "is_configured": true, 00:14:59.942 "data_offset": 0, 00:14:59.943 "data_size": 65536 00:14:59.943 } 00:14:59.943 ] 00:14:59.943 }' 00:14:59.943 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.943 18:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:00.200 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
00:15:00.459 [2024-07-15 18:28:52.828602] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.459 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:00.459 "name": "Existed_Raid", 00:15:00.459 "aliases": [ 00:15:00.459 "15a0d790-42d8-11ef-9ade-d5fc5159efa5" 00:15:00.459 ], 00:15:00.459 "product_name": "Raid Volume", 00:15:00.459 "block_size": 512, 00:15:00.459 "num_blocks": 262144, 00:15:00.459 "uuid": "15a0d790-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "assigned_rate_limits": { 00:15:00.459 "rw_ios_per_sec": 0, 00:15:00.459 "rw_mbytes_per_sec": 0, 00:15:00.459 "r_mbytes_per_sec": 0, 00:15:00.459 "w_mbytes_per_sec": 0 00:15:00.459 }, 00:15:00.459 "claimed": false, 00:15:00.459 "zoned": false, 00:15:00.459 "supported_io_types": { 00:15:00.459 "read": true, 00:15:00.459 "write": true, 00:15:00.459 "unmap": true, 00:15:00.459 "flush": true, 00:15:00.459 "reset": true, 00:15:00.459 "nvme_admin": false, 00:15:00.459 "nvme_io": false, 00:15:00.459 "nvme_io_md": false, 00:15:00.459 "write_zeroes": true, 00:15:00.459 "zcopy": false, 00:15:00.459 "get_zone_info": false, 00:15:00.459 "zone_management": false, 00:15:00.459 "zone_append": false, 00:15:00.459 "compare": false, 00:15:00.459 "compare_and_write": false, 00:15:00.459 "abort": false, 00:15:00.459 "seek_hole": false, 00:15:00.459 "seek_data": false, 00:15:00.459 "copy": false, 00:15:00.459 "nvme_iov_md": false 00:15:00.459 }, 00:15:00.459 "memory_domains": [ 00:15:00.459 { 00:15:00.459 "dma_device_id": "system", 00:15:00.459 "dma_device_type": 1 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.459 "dma_device_type": 2 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "system", 00:15:00.459 "dma_device_type": 1 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.459 "dma_device_type": 2 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "system", 00:15:00.459 "dma_device_type": 1 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.459 "dma_device_type": 2 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "system", 00:15:00.459 "dma_device_type": 1 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.459 "dma_device_type": 2 00:15:00.459 } 00:15:00.459 ], 00:15:00.459 "driver_specific": { 00:15:00.459 "raid": { 00:15:00.459 "uuid": "15a0d790-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "strip_size_kb": 64, 00:15:00.459 "state": "online", 00:15:00.459 "raid_level": "concat", 00:15:00.459 "superblock": false, 00:15:00.459 "num_base_bdevs": 4, 00:15:00.459 "num_base_bdevs_discovered": 4, 00:15:00.459 "num_base_bdevs_operational": 4, 00:15:00.459 "base_bdevs_list": [ 00:15:00.459 { 00:15:00.459 "name": "NewBaseBdev", 00:15:00.459 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "is_configured": true, 00:15:00.459 "data_offset": 0, 00:15:00.459 "data_size": 65536 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "name": "BaseBdev2", 00:15:00.459 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "is_configured": true, 00:15:00.459 "data_offset": 0, 00:15:00.459 "data_size": 65536 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 "name": "BaseBdev3", 00:15:00.459 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "is_configured": true, 00:15:00.459 "data_offset": 0, 00:15:00.459 "data_size": 65536 00:15:00.459 }, 00:15:00.459 { 00:15:00.459 
"name": "BaseBdev4", 00:15:00.459 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.459 "is_configured": true, 00:15:00.459 "data_offset": 0, 00:15:00.459 "data_size": 65536 00:15:00.459 } 00:15:00.459 ] 00:15:00.459 } 00:15:00.459 } 00:15:00.459 }' 00:15:00.459 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.717 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:00.717 BaseBdev2 00:15:00.717 BaseBdev3 00:15:00.717 BaseBdev4' 00:15:00.717 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.717 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.717 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:00.717 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.717 "name": "NewBaseBdev", 00:15:00.717 "aliases": [ 00:15:00.717 "11b80c3f-42d8-11ef-9ade-d5fc5159efa5" 00:15:00.717 ], 00:15:00.717 "product_name": "Malloc disk", 00:15:00.717 "block_size": 512, 00:15:00.717 "num_blocks": 65536, 00:15:00.718 "uuid": "11b80c3f-42d8-11ef-9ade-d5fc5159efa5", 00:15:00.718 "assigned_rate_limits": { 00:15:00.718 "rw_ios_per_sec": 0, 00:15:00.718 "rw_mbytes_per_sec": 0, 00:15:00.718 "r_mbytes_per_sec": 0, 00:15:00.718 "w_mbytes_per_sec": 0 00:15:00.718 }, 00:15:00.718 "claimed": true, 00:15:00.718 "claim_type": "exclusive_write", 00:15:00.718 "zoned": false, 00:15:00.718 "supported_io_types": { 00:15:00.718 "read": true, 00:15:00.718 "write": true, 00:15:00.718 "unmap": true, 00:15:00.718 "flush": true, 00:15:00.718 "reset": true, 00:15:00.718 "nvme_admin": false, 00:15:00.718 "nvme_io": false, 00:15:00.718 "nvme_io_md": false, 00:15:00.718 "write_zeroes": true, 00:15:00.718 "zcopy": true, 00:15:00.718 "get_zone_info": false, 00:15:00.718 "zone_management": false, 00:15:00.718 "zone_append": false, 00:15:00.718 "compare": false, 00:15:00.718 "compare_and_write": false, 00:15:00.718 "abort": true, 00:15:00.718 "seek_hole": false, 00:15:00.718 "seek_data": false, 00:15:00.718 "copy": true, 00:15:00.718 "nvme_iov_md": false 00:15:00.718 }, 00:15:00.718 "memory_domains": [ 00:15:00.718 { 00:15:00.718 "dma_device_id": "system", 00:15:00.718 "dma_device_type": 1 00:15:00.718 }, 00:15:00.718 { 00:15:00.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.718 "dma_device_type": 2 00:15:00.718 } 00:15:00.718 ], 00:15:00.718 "driver_specific": {} 00:15:00.718 }' 00:15:00.718 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.718 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.718 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:00.975 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.233 "name": "BaseBdev2", 00:15:01.233 "aliases": [ 00:15:01.233 "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5" 00:15:01.233 ], 00:15:01.233 "product_name": "Malloc disk", 00:15:01.233 "block_size": 512, 00:15:01.233 "num_blocks": 65536, 00:15:01.233 "uuid": "0f1cdda6-42d8-11ef-9ade-d5fc5159efa5", 00:15:01.233 "assigned_rate_limits": { 00:15:01.233 "rw_ios_per_sec": 0, 00:15:01.233 "rw_mbytes_per_sec": 0, 00:15:01.233 "r_mbytes_per_sec": 0, 00:15:01.233 "w_mbytes_per_sec": 0 00:15:01.233 }, 00:15:01.233 "claimed": true, 00:15:01.233 "claim_type": "exclusive_write", 00:15:01.233 "zoned": false, 00:15:01.233 "supported_io_types": { 00:15:01.233 "read": true, 00:15:01.233 "write": true, 00:15:01.233 "unmap": true, 00:15:01.233 "flush": true, 00:15:01.233 "reset": true, 00:15:01.233 "nvme_admin": false, 00:15:01.233 "nvme_io": false, 00:15:01.233 "nvme_io_md": false, 00:15:01.233 "write_zeroes": true, 00:15:01.233 "zcopy": true, 00:15:01.233 "get_zone_info": false, 00:15:01.233 "zone_management": false, 00:15:01.233 "zone_append": false, 00:15:01.233 "compare": false, 00:15:01.233 "compare_and_write": false, 00:15:01.233 "abort": true, 00:15:01.233 "seek_hole": false, 00:15:01.233 "seek_data": false, 00:15:01.233 "copy": true, 00:15:01.233 "nvme_iov_md": false 00:15:01.233 }, 00:15:01.233 "memory_domains": [ 00:15:01.233 { 00:15:01.233 "dma_device_id": "system", 00:15:01.233 "dma_device_type": 1 00:15:01.233 }, 00:15:01.233 { 00:15:01.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.233 "dma_device_type": 2 00:15:01.233 } 00:15:01.233 ], 00:15:01.233 "driver_specific": {} 00:15:01.233 }' 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.233 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.490 "name": "BaseBdev3", 00:15:01.490 "aliases": [ 00:15:01.490 "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5" 00:15:01.490 ], 00:15:01.490 "product_name": "Malloc disk", 00:15:01.490 "block_size": 512, 00:15:01.490 "num_blocks": 65536, 00:15:01.490 "uuid": "0f8e66a1-42d8-11ef-9ade-d5fc5159efa5", 00:15:01.490 "assigned_rate_limits": { 00:15:01.490 "rw_ios_per_sec": 0, 00:15:01.490 "rw_mbytes_per_sec": 0, 00:15:01.490 "r_mbytes_per_sec": 0, 00:15:01.490 "w_mbytes_per_sec": 0 00:15:01.490 }, 00:15:01.490 "claimed": true, 00:15:01.490 "claim_type": "exclusive_write", 00:15:01.490 "zoned": false, 00:15:01.490 "supported_io_types": { 00:15:01.490 "read": true, 00:15:01.490 "write": true, 00:15:01.490 "unmap": true, 00:15:01.490 "flush": true, 00:15:01.490 "reset": true, 00:15:01.490 "nvme_admin": false, 00:15:01.490 "nvme_io": false, 00:15:01.490 "nvme_io_md": false, 00:15:01.490 "write_zeroes": true, 00:15:01.490 "zcopy": true, 00:15:01.490 "get_zone_info": false, 00:15:01.490 "zone_management": false, 00:15:01.490 "zone_append": false, 00:15:01.490 "compare": false, 00:15:01.490 "compare_and_write": false, 00:15:01.490 "abort": true, 00:15:01.490 "seek_hole": false, 00:15:01.490 "seek_data": false, 00:15:01.490 "copy": true, 00:15:01.490 "nvme_iov_md": false 00:15:01.490 }, 00:15:01.490 "memory_domains": [ 00:15:01.490 { 00:15:01.490 "dma_device_id": "system", 00:15:01.490 "dma_device_type": 1 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.490 "dma_device_type": 2 00:15:01.490 } 00:15:01.490 ], 00:15:01.490 "driver_specific": {} 00:15:01.490 }' 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:01.490 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.747 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.747 "name": "BaseBdev4", 00:15:01.747 "aliases": [ 00:15:01.747 "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5" 00:15:01.747 ], 00:15:01.747 "product_name": "Malloc disk", 00:15:01.747 "block_size": 512, 00:15:01.747 "num_blocks": 65536, 00:15:01.747 "uuid": "0ffce1e7-42d8-11ef-9ade-d5fc5159efa5", 00:15:01.747 "assigned_rate_limits": { 00:15:01.747 "rw_ios_per_sec": 0, 00:15:01.747 "rw_mbytes_per_sec": 0, 00:15:01.747 "r_mbytes_per_sec": 0, 00:15:01.747 "w_mbytes_per_sec": 0 00:15:01.747 }, 00:15:01.747 "claimed": true, 00:15:01.747 "claim_type": "exclusive_write", 00:15:01.747 "zoned": false, 00:15:01.747 "supported_io_types": { 00:15:01.747 "read": true, 00:15:01.747 "write": true, 00:15:01.747 "unmap": true, 00:15:01.747 "flush": true, 00:15:01.747 "reset": true, 00:15:01.747 "nvme_admin": false, 00:15:01.747 "nvme_io": false, 00:15:01.747 "nvme_io_md": false, 00:15:01.747 "write_zeroes": true, 00:15:01.747 "zcopy": true, 00:15:01.747 "get_zone_info": false, 00:15:01.747 "zone_management": false, 00:15:01.747 "zone_append": false, 00:15:01.747 "compare": false, 00:15:01.747 "compare_and_write": false, 00:15:01.747 "abort": true, 00:15:01.747 "seek_hole": false, 00:15:01.747 "seek_data": false, 00:15:01.747 "copy": true, 00:15:01.747 "nvme_iov_md": false 00:15:01.747 }, 00:15:01.747 "memory_domains": [ 00:15:01.747 { 00:15:01.747 "dma_device_id": "system", 00:15:01.747 "dma_device_type": 1 00:15:01.747 }, 00:15:01.747 { 00:15:01.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.747 "dma_device_type": 2 00:15:01.747 } 00:15:01.747 ], 00:15:01.747 "driver_specific": {} 00:15:01.747 }' 00:15:01.747 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.747 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.747 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.748 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
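
The property checks above (verify_raid_bdev_properties) dump the raid volume, list its configured members, and compare .block_size, .md_size, .md_interleave and .dif_type for each member against the volume itself. A condensed sketch of that walk, reusing the jq filters seen in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$info"
    # -> NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; each is then diffed on
    #    block_size / md_size / md_interleave / dif_type against the raid volume's own values
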
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.006 [2024-07-15 18:28:54.332737] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.006 [2024-07-15 18:28:54.332778] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.006 [2024-07-15 18:28:54.332811] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.006 [2024-07-15 18:28:54.332831] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.006 [2024-07-15 18:28:54.332836] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32ad92a34f00 name Existed_Raid, state offline 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60739 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60739 ']' 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60739 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60739 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:02.006 killing process with pid 60739 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60739' 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60739 00:15:02.006 [2024-07-15 18:28:54.363138] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.006 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60739 00:15:02.006 [2024-07-15 18:28:54.396055] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:02.264 00:15:02.264 real 0m28.280s 00:15:02.264 user 0m51.358s 00:15:02.264 sys 0m4.334s 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.264 ************************************ 00:15:02.264 END TEST raid_state_function_test 00:15:02.264 ************************************ 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 18:28:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:02.264 18:28:54 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:02.264 18:28:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:02.264 18:28:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.264 18:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 ************************************ 00:15:02.264 START TEST raid_state_function_test_sb 00:15:02.264 ************************************ 00:15:02.264 18:28:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61558 00:15:02.264 Process raid pid: 61558 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61558' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61558 /var/tmp/spdk-raid.sock 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61558 ']' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.264 18:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 [2024-07-15 18:28:54.654468] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:02.523 [2024-07-15 18:28:54.654641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:03.090 EAL: TSC is not safe to use in SMP mode 00:15:03.090 EAL: TSC is not invariant 00:15:03.090 [2024-07-15 18:28:55.256446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.090 [2024-07-15 18:28:55.375725] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:03.090 [2024-07-15 18:28:55.378355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.090 [2024-07-15 18:28:55.379310] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.090 [2024-07-15 18:28:55.379327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.679 18:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.679 18:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:03.679 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:03.679 [2024-07-15 18:28:55.973018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.679 [2024-07-15 18:28:55.973076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.679 [2024-07-15 18:28:55.973082] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.679 [2024-07-15 18:28:55.973091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.679 [2024-07-15 18:28:55.973095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.679 [2024-07-15 18:28:55.973103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.679 [2024-07-15 18:28:55.973106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:03.679 [2024-07-15 18:28:55.973113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:03.679 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:03.679 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.680 18:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.938 18:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.938 "name": "Existed_Raid", 00:15:03.938 "uuid": 
"184b45ac-42d8-11ef-9ade-d5fc5159efa5", 00:15:03.938 "strip_size_kb": 64, 00:15:03.938 "state": "configuring", 00:15:03.938 "raid_level": "concat", 00:15:03.938 "superblock": true, 00:15:03.938 "num_base_bdevs": 4, 00:15:03.938 "num_base_bdevs_discovered": 0, 00:15:03.938 "num_base_bdevs_operational": 4, 00:15:03.938 "base_bdevs_list": [ 00:15:03.938 { 00:15:03.938 "name": "BaseBdev1", 00:15:03.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.938 "is_configured": false, 00:15:03.938 "data_offset": 0, 00:15:03.938 "data_size": 0 00:15:03.938 }, 00:15:03.938 { 00:15:03.938 "name": "BaseBdev2", 00:15:03.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.938 "is_configured": false, 00:15:03.938 "data_offset": 0, 00:15:03.938 "data_size": 0 00:15:03.938 }, 00:15:03.938 { 00:15:03.938 "name": "BaseBdev3", 00:15:03.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.938 "is_configured": false, 00:15:03.938 "data_offset": 0, 00:15:03.938 "data_size": 0 00:15:03.938 }, 00:15:03.938 { 00:15:03.938 "name": "BaseBdev4", 00:15:03.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.938 "is_configured": false, 00:15:03.938 "data_offset": 0, 00:15:03.938 "data_size": 0 00:15:03.938 } 00:15:03.938 ] 00:15:03.938 }' 00:15:03.938 18:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.938 18:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.196 18:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.454 [2024-07-15 18:28:56.761036] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.454 [2024-07-15 18:28:56.761064] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x40c8234500 name Existed_Raid, state configuring 00:15:04.454 18:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:04.712 [2024-07-15 18:28:57.041078] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.712 [2024-07-15 18:28:57.041131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.712 [2024-07-15 18:28:57.041137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.712 [2024-07-15 18:28:57.041146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.712 [2024-07-15 18:28:57.041150] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.712 [2024-07-15 18:28:57.041158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.712 [2024-07-15 18:28:57.041161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:04.712 [2024-07-15 18:28:57.041169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:04.712 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.970 [2024-07-15 18:28:57.326106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:15:04.970 BaseBdev1 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.970 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.229 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.488 [ 00:15:05.488 { 00:15:05.488 "name": "BaseBdev1", 00:15:05.488 "aliases": [ 00:15:05.488 "19199580-42d8-11ef-9ade-d5fc5159efa5" 00:15:05.488 ], 00:15:05.488 "product_name": "Malloc disk", 00:15:05.488 "block_size": 512, 00:15:05.488 "num_blocks": 65536, 00:15:05.488 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:05.488 "assigned_rate_limits": { 00:15:05.488 "rw_ios_per_sec": 0, 00:15:05.488 "rw_mbytes_per_sec": 0, 00:15:05.488 "r_mbytes_per_sec": 0, 00:15:05.488 "w_mbytes_per_sec": 0 00:15:05.488 }, 00:15:05.488 "claimed": true, 00:15:05.488 "claim_type": "exclusive_write", 00:15:05.488 "zoned": false, 00:15:05.488 "supported_io_types": { 00:15:05.488 "read": true, 00:15:05.488 "write": true, 00:15:05.488 "unmap": true, 00:15:05.488 "flush": true, 00:15:05.488 "reset": true, 00:15:05.488 "nvme_admin": false, 00:15:05.488 "nvme_io": false, 00:15:05.488 "nvme_io_md": false, 00:15:05.488 "write_zeroes": true, 00:15:05.488 "zcopy": true, 00:15:05.488 "get_zone_info": false, 00:15:05.488 "zone_management": false, 00:15:05.488 "zone_append": false, 00:15:05.488 "compare": false, 00:15:05.488 "compare_and_write": false, 00:15:05.488 "abort": true, 00:15:05.488 "seek_hole": false, 00:15:05.488 "seek_data": false, 00:15:05.488 "copy": true, 00:15:05.488 "nvme_iov_md": false 00:15:05.488 }, 00:15:05.488 "memory_domains": [ 00:15:05.488 { 00:15:05.488 "dma_device_id": "system", 00:15:05.488 "dma_device_type": 1 00:15:05.488 }, 00:15:05.488 { 00:15:05.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.488 "dma_device_type": 2 00:15:05.488 } 00:15:05.488 ], 00:15:05.488 "driver_specific": {} 00:15:05.488 } 00:15:05.488 ] 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.488 18:28:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.488 18:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.055 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.055 "name": "Existed_Raid", 00:15:06.055 "uuid": "18ee3ed2-42d8-11ef-9ade-d5fc5159efa5", 00:15:06.055 "strip_size_kb": 64, 00:15:06.055 "state": "configuring", 00:15:06.055 "raid_level": "concat", 00:15:06.055 "superblock": true, 00:15:06.055 "num_base_bdevs": 4, 00:15:06.055 "num_base_bdevs_discovered": 1, 00:15:06.055 "num_base_bdevs_operational": 4, 00:15:06.055 "base_bdevs_list": [ 00:15:06.055 { 00:15:06.055 "name": "BaseBdev1", 00:15:06.055 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:06.055 "is_configured": true, 00:15:06.055 "data_offset": 2048, 00:15:06.055 "data_size": 63488 00:15:06.055 }, 00:15:06.055 { 00:15:06.055 "name": "BaseBdev2", 00:15:06.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.055 "is_configured": false, 00:15:06.055 "data_offset": 0, 00:15:06.055 "data_size": 0 00:15:06.055 }, 00:15:06.055 { 00:15:06.055 "name": "BaseBdev3", 00:15:06.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.055 "is_configured": false, 00:15:06.055 "data_offset": 0, 00:15:06.055 "data_size": 0 00:15:06.055 }, 00:15:06.055 { 00:15:06.055 "name": "BaseBdev4", 00:15:06.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.055 "is_configured": false, 00:15:06.055 "data_offset": 0, 00:15:06.055 "data_size": 0 00:15:06.055 } 00:15:06.055 ] 00:15:06.055 }' 00:15:06.055 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.055 18:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.313 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.313 [2024-07-15 18:28:58.685170] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.313 [2024-07-15 18:28:58.685208] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x40c8234500 name Existed_Raid, state configuring 00:15:06.313 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:06.571 [2024-07-15 18:28:58.925206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.571 [2024-07-15 18:28:58.926009] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.571 [2024-07-15 18:28:58.926047] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.571 [2024-07-15 18:28:58.926052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.571 [2024-07-15 18:28:58.926061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.571 [2024-07-15 18:28:58.926065] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:06.571 [2024-07-15 18:28:58.926073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.571 18:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.136 18:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.136 "name": "Existed_Raid", 00:15:07.136 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:07.136 "strip_size_kb": 64, 00:15:07.136 "state": "configuring", 00:15:07.136 "raid_level": "concat", 00:15:07.136 "superblock": true, 00:15:07.136 "num_base_bdevs": 4, 00:15:07.136 "num_base_bdevs_discovered": 1, 00:15:07.136 "num_base_bdevs_operational": 4, 00:15:07.136 "base_bdevs_list": [ 00:15:07.136 { 00:15:07.136 "name": "BaseBdev1", 00:15:07.136 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:07.136 "is_configured": true, 00:15:07.136 "data_offset": 2048, 00:15:07.136 "data_size": 63488 00:15:07.136 }, 00:15:07.136 { 00:15:07.136 "name": "BaseBdev2", 00:15:07.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.136 "is_configured": false, 00:15:07.136 "data_offset": 0, 00:15:07.136 "data_size": 0 00:15:07.136 }, 00:15:07.136 { 00:15:07.136 "name": "BaseBdev3", 00:15:07.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.136 "is_configured": false, 00:15:07.136 "data_offset": 0, 00:15:07.136 "data_size": 0 00:15:07.136 }, 00:15:07.136 { 00:15:07.136 "name": "BaseBdev4", 
00:15:07.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.136 "is_configured": false, 00:15:07.136 "data_offset": 0, 00:15:07.136 "data_size": 0 00:15:07.136 } 00:15:07.136 ] 00:15:07.136 }' 00:15:07.136 18:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.136 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 18:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.394 [2024-07-15 18:28:59.769401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.394 BaseBdev2 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:07.652 18:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.652 18:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.910 [ 00:15:07.910 { 00:15:07.910 "name": "BaseBdev2", 00:15:07.910 "aliases": [ 00:15:07.910 "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5" 00:15:07.910 ], 00:15:07.910 "product_name": "Malloc disk", 00:15:07.910 "block_size": 512, 00:15:07.910 "num_blocks": 65536, 00:15:07.910 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:07.910 "assigned_rate_limits": { 00:15:07.910 "rw_ios_per_sec": 0, 00:15:07.910 "rw_mbytes_per_sec": 0, 00:15:07.910 "r_mbytes_per_sec": 0, 00:15:07.910 "w_mbytes_per_sec": 0 00:15:07.910 }, 00:15:07.910 "claimed": true, 00:15:07.910 "claim_type": "exclusive_write", 00:15:07.910 "zoned": false, 00:15:07.910 "supported_io_types": { 00:15:07.910 "read": true, 00:15:07.910 "write": true, 00:15:07.910 "unmap": true, 00:15:07.910 "flush": true, 00:15:07.910 "reset": true, 00:15:07.910 "nvme_admin": false, 00:15:07.910 "nvme_io": false, 00:15:07.910 "nvme_io_md": false, 00:15:07.910 "write_zeroes": true, 00:15:07.910 "zcopy": true, 00:15:07.910 "get_zone_info": false, 00:15:07.910 "zone_management": false, 00:15:07.910 "zone_append": false, 00:15:07.910 "compare": false, 00:15:07.910 "compare_and_write": false, 00:15:07.910 "abort": true, 00:15:07.910 "seek_hole": false, 00:15:07.910 "seek_data": false, 00:15:07.910 "copy": true, 00:15:07.910 "nvme_iov_md": false 00:15:07.910 }, 00:15:07.910 "memory_domains": [ 00:15:07.910 { 00:15:07.910 "dma_device_id": "system", 00:15:07.910 "dma_device_type": 1 00:15:07.910 }, 00:15:07.910 { 00:15:07.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.910 "dma_device_type": 2 00:15:07.910 } 00:15:07.910 ], 00:15:07.910 "driver_specific": {} 00:15:07.910 } 00:15:07.910 ] 00:15:07.910 18:29:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.910 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.475 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.475 "name": "Existed_Raid", 00:15:08.475 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:08.475 "strip_size_kb": 64, 00:15:08.475 "state": "configuring", 00:15:08.475 "raid_level": "concat", 00:15:08.475 "superblock": true, 00:15:08.475 "num_base_bdevs": 4, 00:15:08.475 "num_base_bdevs_discovered": 2, 00:15:08.475 "num_base_bdevs_operational": 4, 00:15:08.475 "base_bdevs_list": [ 00:15:08.475 { 00:15:08.475 "name": "BaseBdev1", 00:15:08.475 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:08.475 "is_configured": true, 00:15:08.475 "data_offset": 2048, 00:15:08.475 "data_size": 63488 00:15:08.475 }, 00:15:08.475 { 00:15:08.475 "name": "BaseBdev2", 00:15:08.475 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:08.475 "is_configured": true, 00:15:08.475 "data_offset": 2048, 00:15:08.475 "data_size": 63488 00:15:08.475 }, 00:15:08.475 { 00:15:08.475 "name": "BaseBdev3", 00:15:08.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.475 "is_configured": false, 00:15:08.475 "data_offset": 0, 00:15:08.475 "data_size": 0 00:15:08.475 }, 00:15:08.475 { 00:15:08.475 "name": "BaseBdev4", 00:15:08.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.475 "is_configured": false, 00:15:08.475 "data_offset": 0, 00:15:08.475 "data_size": 0 00:15:08.475 } 00:15:08.475 ] 00:15:08.475 }' 00:15:08.475 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.475 18:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.732 18:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:08.990 [2024-07-15 18:29:01.177507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.990 BaseBdev3 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:08.990 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.247 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.504 [ 00:15:09.504 { 00:15:09.504 "name": "BaseBdev3", 00:15:09.504 "aliases": [ 00:15:09.504 "1b65656b-42d8-11ef-9ade-d5fc5159efa5" 00:15:09.504 ], 00:15:09.504 "product_name": "Malloc disk", 00:15:09.504 "block_size": 512, 00:15:09.504 "num_blocks": 65536, 00:15:09.504 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:09.504 "assigned_rate_limits": { 00:15:09.504 "rw_ios_per_sec": 0, 00:15:09.504 "rw_mbytes_per_sec": 0, 00:15:09.504 "r_mbytes_per_sec": 0, 00:15:09.504 "w_mbytes_per_sec": 0 00:15:09.504 }, 00:15:09.504 "claimed": true, 00:15:09.504 "claim_type": "exclusive_write", 00:15:09.504 "zoned": false, 00:15:09.504 "supported_io_types": { 00:15:09.504 "read": true, 00:15:09.504 "write": true, 00:15:09.504 "unmap": true, 00:15:09.504 "flush": true, 00:15:09.504 "reset": true, 00:15:09.504 "nvme_admin": false, 00:15:09.504 "nvme_io": false, 00:15:09.504 "nvme_io_md": false, 00:15:09.504 "write_zeroes": true, 00:15:09.504 "zcopy": true, 00:15:09.504 "get_zone_info": false, 00:15:09.504 "zone_management": false, 00:15:09.504 "zone_append": false, 00:15:09.504 "compare": false, 00:15:09.504 "compare_and_write": false, 00:15:09.504 "abort": true, 00:15:09.504 "seek_hole": false, 00:15:09.504 "seek_data": false, 00:15:09.505 "copy": true, 00:15:09.505 "nvme_iov_md": false 00:15:09.505 }, 00:15:09.505 "memory_domains": [ 00:15:09.505 { 00:15:09.505 "dma_device_id": "system", 00:15:09.505 "dma_device_type": 1 00:15:09.505 }, 00:15:09.505 { 00:15:09.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.505 "dma_device_type": 2 00:15:09.505 } 00:15:09.505 ], 00:15:09.505 "driver_specific": {} 00:15:09.505 } 00:15:09.505 ] 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.505 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.762 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.762 "name": "Existed_Raid", 00:15:09.762 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:09.762 "strip_size_kb": 64, 00:15:09.762 "state": "configuring", 00:15:09.762 "raid_level": "concat", 00:15:09.762 "superblock": true, 00:15:09.762 "num_base_bdevs": 4, 00:15:09.762 "num_base_bdevs_discovered": 3, 00:15:09.762 "num_base_bdevs_operational": 4, 00:15:09.762 "base_bdevs_list": [ 00:15:09.762 { 00:15:09.762 "name": "BaseBdev1", 00:15:09.762 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:09.762 "is_configured": true, 00:15:09.762 "data_offset": 2048, 00:15:09.762 "data_size": 63488 00:15:09.762 }, 00:15:09.762 { 00:15:09.762 "name": "BaseBdev2", 00:15:09.762 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:09.762 "is_configured": true, 00:15:09.762 "data_offset": 2048, 00:15:09.762 "data_size": 63488 00:15:09.762 }, 00:15:09.762 { 00:15:09.762 "name": "BaseBdev3", 00:15:09.762 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:09.762 "is_configured": true, 00:15:09.762 "data_offset": 2048, 00:15:09.762 "data_size": 63488 00:15:09.762 }, 00:15:09.762 { 00:15:09.762 "name": "BaseBdev4", 00:15:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.762 "is_configured": false, 00:15:09.762 "data_offset": 0, 00:15:09.762 "data_size": 0 00:15:09.762 } 00:15:09.763 ] 00:15:09.763 }' 00:15:09.763 18:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.763 18:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 18:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:10.279 [2024-07-15 18:29:02.549590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.279 [2024-07-15 18:29:02.549661] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x40c8234a00 00:15:10.279 [2024-07-15 18:29:02.549668] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:10.279 [2024-07-15 
18:29:02.549690] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x40c8297e20 00:15:10.279 [2024-07-15 18:29:02.549747] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x40c8234a00 00:15:10.279 [2024-07-15 18:29:02.549751] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x40c8234a00 00:15:10.279 [2024-07-15 18:29:02.549773] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.279 BaseBdev4 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.279 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.537 18:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:10.794 [ 00:15:10.794 { 00:15:10.794 "name": "BaseBdev4", 00:15:10.794 "aliases": [ 00:15:10.794 "1c36c291-42d8-11ef-9ade-d5fc5159efa5" 00:15:10.794 ], 00:15:10.794 "product_name": "Malloc disk", 00:15:10.794 "block_size": 512, 00:15:10.794 "num_blocks": 65536, 00:15:10.794 "uuid": "1c36c291-42d8-11ef-9ade-d5fc5159efa5", 00:15:10.794 "assigned_rate_limits": { 00:15:10.794 "rw_ios_per_sec": 0, 00:15:10.794 "rw_mbytes_per_sec": 0, 00:15:10.794 "r_mbytes_per_sec": 0, 00:15:10.794 "w_mbytes_per_sec": 0 00:15:10.794 }, 00:15:10.794 "claimed": true, 00:15:10.794 "claim_type": "exclusive_write", 00:15:10.794 "zoned": false, 00:15:10.794 "supported_io_types": { 00:15:10.794 "read": true, 00:15:10.794 "write": true, 00:15:10.794 "unmap": true, 00:15:10.794 "flush": true, 00:15:10.794 "reset": true, 00:15:10.794 "nvme_admin": false, 00:15:10.794 "nvme_io": false, 00:15:10.794 "nvme_io_md": false, 00:15:10.794 "write_zeroes": true, 00:15:10.794 "zcopy": true, 00:15:10.794 "get_zone_info": false, 00:15:10.794 "zone_management": false, 00:15:10.794 "zone_append": false, 00:15:10.795 "compare": false, 00:15:10.795 "compare_and_write": false, 00:15:10.795 "abort": true, 00:15:10.795 "seek_hole": false, 00:15:10.795 "seek_data": false, 00:15:10.795 "copy": true, 00:15:10.795 "nvme_iov_md": false 00:15:10.795 }, 00:15:10.795 "memory_domains": [ 00:15:10.795 { 00:15:10.795 "dma_device_id": "system", 00:15:10.795 "dma_device_type": 1 00:15:10.795 }, 00:15:10.795 { 00:15:10.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.795 "dma_device_type": 2 00:15:10.795 } 00:15:10.795 ], 00:15:10.795 "driver_specific": {} 00:15:10.795 } 00:15:10.795 ] 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 
-- # (( i < num_base_bdevs )) 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.795 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.053 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.053 "name": "Existed_Raid", 00:15:11.053 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.053 "strip_size_kb": 64, 00:15:11.053 "state": "online", 00:15:11.053 "raid_level": "concat", 00:15:11.053 "superblock": true, 00:15:11.053 "num_base_bdevs": 4, 00:15:11.053 "num_base_bdevs_discovered": 4, 00:15:11.053 "num_base_bdevs_operational": 4, 00:15:11.053 "base_bdevs_list": [ 00:15:11.053 { 00:15:11.053 "name": "BaseBdev1", 00:15:11.053 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.053 "is_configured": true, 00:15:11.053 "data_offset": 2048, 00:15:11.053 "data_size": 63488 00:15:11.053 }, 00:15:11.053 { 00:15:11.053 "name": "BaseBdev2", 00:15:11.053 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.053 "is_configured": true, 00:15:11.053 "data_offset": 2048, 00:15:11.053 "data_size": 63488 00:15:11.053 }, 00:15:11.053 { 00:15:11.053 "name": "BaseBdev3", 00:15:11.053 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.053 "is_configured": true, 00:15:11.053 "data_offset": 2048, 00:15:11.053 "data_size": 63488 00:15:11.053 }, 00:15:11.053 { 00:15:11.053 "name": "BaseBdev4", 00:15:11.053 "uuid": "1c36c291-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.053 "is_configured": true, 00:15:11.053 "data_offset": 2048, 00:15:11.053 "data_size": 63488 00:15:11.053 } 00:15:11.053 ] 00:15:11.053 }' 00:15:11.053 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.053 18:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:11.311 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:11.569 [2024-07-15 18:29:03.917629] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:11.569 "name": "Existed_Raid", 00:15:11.569 "aliases": [ 00:15:11.569 "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5" 00:15:11.569 ], 00:15:11.569 "product_name": "Raid Volume", 00:15:11.569 "block_size": 512, 00:15:11.569 "num_blocks": 253952, 00:15:11.569 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "assigned_rate_limits": { 00:15:11.569 "rw_ios_per_sec": 0, 00:15:11.569 "rw_mbytes_per_sec": 0, 00:15:11.569 "r_mbytes_per_sec": 0, 00:15:11.569 "w_mbytes_per_sec": 0 00:15:11.569 }, 00:15:11.569 "claimed": false, 00:15:11.569 "zoned": false, 00:15:11.569 "supported_io_types": { 00:15:11.569 "read": true, 00:15:11.569 "write": true, 00:15:11.569 "unmap": true, 00:15:11.569 "flush": true, 00:15:11.569 "reset": true, 00:15:11.569 "nvme_admin": false, 00:15:11.569 "nvme_io": false, 00:15:11.569 "nvme_io_md": false, 00:15:11.569 "write_zeroes": true, 00:15:11.569 "zcopy": false, 00:15:11.569 "get_zone_info": false, 00:15:11.569 "zone_management": false, 00:15:11.569 "zone_append": false, 00:15:11.569 "compare": false, 00:15:11.569 "compare_and_write": false, 00:15:11.569 "abort": false, 00:15:11.569 "seek_hole": false, 00:15:11.569 "seek_data": false, 00:15:11.569 "copy": false, 00:15:11.569 "nvme_iov_md": false 00:15:11.569 }, 00:15:11.569 "memory_domains": [ 00:15:11.569 { 00:15:11.569 "dma_device_id": "system", 00:15:11.569 "dma_device_type": 1 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.569 "dma_device_type": 2 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "system", 00:15:11.569 "dma_device_type": 1 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.569 "dma_device_type": 2 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "system", 00:15:11.569 "dma_device_type": 1 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.569 "dma_device_type": 2 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "system", 00:15:11.569 "dma_device_type": 1 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.569 "dma_device_type": 2 00:15:11.569 } 00:15:11.569 ], 00:15:11.569 "driver_specific": { 00:15:11.569 "raid": { 00:15:11.569 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "strip_size_kb": 64, 00:15:11.569 "state": "online", 00:15:11.569 "raid_level": "concat", 00:15:11.569 "superblock": true, 00:15:11.569 "num_base_bdevs": 4, 00:15:11.569 "num_base_bdevs_discovered": 4, 00:15:11.569 "num_base_bdevs_operational": 4, 00:15:11.569 "base_bdevs_list": [ 00:15:11.569 { 00:15:11.569 "name": "BaseBdev1", 00:15:11.569 "uuid": 
"19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "is_configured": true, 00:15:11.569 "data_offset": 2048, 00:15:11.569 "data_size": 63488 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "name": "BaseBdev2", 00:15:11.569 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "is_configured": true, 00:15:11.569 "data_offset": 2048, 00:15:11.569 "data_size": 63488 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "name": "BaseBdev3", 00:15:11.569 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "is_configured": true, 00:15:11.569 "data_offset": 2048, 00:15:11.569 "data_size": 63488 00:15:11.569 }, 00:15:11.569 { 00:15:11.569 "name": "BaseBdev4", 00:15:11.569 "uuid": "1c36c291-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.569 "is_configured": true, 00:15:11.569 "data_offset": 2048, 00:15:11.569 "data_size": 63488 00:15:11.569 } 00:15:11.569 ] 00:15:11.569 } 00:15:11.569 } 00:15:11.569 }' 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:11.569 BaseBdev2 00:15:11.569 BaseBdev3 00:15:11.569 BaseBdev4' 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:11.569 18:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.827 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.827 "name": "BaseBdev1", 00:15:11.827 "aliases": [ 00:15:11.827 "19199580-42d8-11ef-9ade-d5fc5159efa5" 00:15:11.827 ], 00:15:11.827 "product_name": "Malloc disk", 00:15:11.827 "block_size": 512, 00:15:11.827 "num_blocks": 65536, 00:15:11.827 "uuid": "19199580-42d8-11ef-9ade-d5fc5159efa5", 00:15:11.827 "assigned_rate_limits": { 00:15:11.827 "rw_ios_per_sec": 0, 00:15:11.827 "rw_mbytes_per_sec": 0, 00:15:11.827 "r_mbytes_per_sec": 0, 00:15:11.827 "w_mbytes_per_sec": 0 00:15:11.827 }, 00:15:11.827 "claimed": true, 00:15:11.827 "claim_type": "exclusive_write", 00:15:11.827 "zoned": false, 00:15:11.827 "supported_io_types": { 00:15:11.827 "read": true, 00:15:11.827 "write": true, 00:15:11.827 "unmap": true, 00:15:11.827 "flush": true, 00:15:11.827 "reset": true, 00:15:11.827 "nvme_admin": false, 00:15:11.827 "nvme_io": false, 00:15:11.827 "nvme_io_md": false, 00:15:11.827 "write_zeroes": true, 00:15:11.827 "zcopy": true, 00:15:11.827 "get_zone_info": false, 00:15:11.827 "zone_management": false, 00:15:11.827 "zone_append": false, 00:15:11.827 "compare": false, 00:15:11.827 "compare_and_write": false, 00:15:11.827 "abort": true, 00:15:11.828 "seek_hole": false, 00:15:11.828 "seek_data": false, 00:15:11.828 "copy": true, 00:15:11.828 "nvme_iov_md": false 00:15:11.828 }, 00:15:11.828 "memory_domains": [ 00:15:11.828 { 00:15:11.828 "dma_device_id": "system", 00:15:11.828 "dma_device_type": 1 00:15:11.828 }, 00:15:11.828 { 00:15:11.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.828 "dma_device_type": 2 00:15:11.828 } 00:15:11.828 ], 00:15:11.828 "driver_specific": {} 00:15:11.828 }' 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.828 18:29:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.828 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:12.085 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.343 "name": "BaseBdev2", 00:15:12.343 "aliases": [ 00:15:12.343 "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5" 00:15:12.343 ], 00:15:12.343 "product_name": "Malloc disk", 00:15:12.343 "block_size": 512, 00:15:12.343 "num_blocks": 65536, 00:15:12.343 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:12.343 "assigned_rate_limits": { 00:15:12.343 "rw_ios_per_sec": 0, 00:15:12.343 "rw_mbytes_per_sec": 0, 00:15:12.343 "r_mbytes_per_sec": 0, 00:15:12.343 "w_mbytes_per_sec": 0 00:15:12.343 }, 00:15:12.343 "claimed": true, 00:15:12.343 "claim_type": "exclusive_write", 00:15:12.343 "zoned": false, 00:15:12.343 "supported_io_types": { 00:15:12.343 "read": true, 00:15:12.343 "write": true, 00:15:12.343 "unmap": true, 00:15:12.343 "flush": true, 00:15:12.343 "reset": true, 00:15:12.343 "nvme_admin": false, 00:15:12.343 "nvme_io": false, 00:15:12.343 "nvme_io_md": false, 00:15:12.343 "write_zeroes": true, 00:15:12.343 "zcopy": true, 00:15:12.343 "get_zone_info": false, 00:15:12.343 "zone_management": false, 00:15:12.343 "zone_append": false, 00:15:12.343 "compare": false, 00:15:12.343 "compare_and_write": false, 00:15:12.343 "abort": true, 00:15:12.343 "seek_hole": false, 00:15:12.343 "seek_data": false, 00:15:12.343 "copy": true, 00:15:12.343 "nvme_iov_md": false 00:15:12.343 }, 00:15:12.343 "memory_domains": [ 00:15:12.343 { 00:15:12.343 "dma_device_id": "system", 00:15:12.343 "dma_device_type": 1 00:15:12.343 }, 00:15:12.343 { 00:15:12.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.343 "dma_device_type": 2 00:15:12.343 } 00:15:12.343 ], 00:15:12.343 "driver_specific": {} 00:15:12.343 }' 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:12.343 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.602 "name": "BaseBdev3", 00:15:12.602 "aliases": [ 00:15:12.602 "1b65656b-42d8-11ef-9ade-d5fc5159efa5" 00:15:12.602 ], 00:15:12.602 "product_name": "Malloc disk", 00:15:12.602 "block_size": 512, 00:15:12.602 "num_blocks": 65536, 00:15:12.602 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:12.602 "assigned_rate_limits": { 00:15:12.602 "rw_ios_per_sec": 0, 00:15:12.602 "rw_mbytes_per_sec": 0, 00:15:12.602 "r_mbytes_per_sec": 0, 00:15:12.602 "w_mbytes_per_sec": 0 00:15:12.602 }, 00:15:12.602 "claimed": true, 00:15:12.602 "claim_type": "exclusive_write", 00:15:12.602 "zoned": false, 00:15:12.602 "supported_io_types": { 00:15:12.602 "read": true, 00:15:12.602 "write": true, 00:15:12.602 "unmap": true, 00:15:12.602 "flush": true, 00:15:12.602 "reset": true, 00:15:12.602 "nvme_admin": false, 00:15:12.602 "nvme_io": false, 00:15:12.602 "nvme_io_md": false, 00:15:12.602 "write_zeroes": true, 00:15:12.602 "zcopy": true, 00:15:12.602 "get_zone_info": false, 00:15:12.602 "zone_management": false, 00:15:12.602 "zone_append": false, 00:15:12.602 "compare": false, 00:15:12.602 "compare_and_write": false, 00:15:12.602 "abort": true, 00:15:12.602 "seek_hole": false, 00:15:12.602 "seek_data": false, 00:15:12.602 "copy": true, 00:15:12.602 "nvme_iov_md": false 00:15:12.602 }, 00:15:12.602 "memory_domains": [ 00:15:12.602 { 00:15:12.602 "dma_device_id": "system", 00:15:12.602 "dma_device_type": 1 00:15:12.602 }, 00:15:12.602 { 00:15:12.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.602 "dma_device_type": 2 00:15:12.602 } 00:15:12.602 ], 00:15:12.602 "driver_specific": {} 00:15:12.602 }' 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:12.602 18:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.860 "name": "BaseBdev4", 00:15:12.860 "aliases": [ 00:15:12.860 "1c36c291-42d8-11ef-9ade-d5fc5159efa5" 00:15:12.860 ], 00:15:12.860 "product_name": "Malloc disk", 00:15:12.860 "block_size": 512, 00:15:12.860 "num_blocks": 65536, 00:15:12.860 "uuid": "1c36c291-42d8-11ef-9ade-d5fc5159efa5", 00:15:12.860 "assigned_rate_limits": { 00:15:12.860 "rw_ios_per_sec": 0, 00:15:12.860 "rw_mbytes_per_sec": 0, 00:15:12.860 "r_mbytes_per_sec": 0, 00:15:12.860 "w_mbytes_per_sec": 0 00:15:12.860 }, 00:15:12.860 "claimed": true, 00:15:12.860 "claim_type": "exclusive_write", 00:15:12.860 "zoned": false, 00:15:12.860 "supported_io_types": { 00:15:12.860 "read": true, 00:15:12.860 "write": true, 00:15:12.860 "unmap": true, 00:15:12.860 "flush": true, 00:15:12.860 "reset": true, 00:15:12.860 "nvme_admin": false, 00:15:12.860 "nvme_io": false, 00:15:12.860 "nvme_io_md": false, 00:15:12.860 "write_zeroes": true, 00:15:12.860 "zcopy": true, 00:15:12.860 "get_zone_info": false, 00:15:12.860 "zone_management": false, 00:15:12.860 "zone_append": false, 00:15:12.860 "compare": false, 00:15:12.860 "compare_and_write": false, 00:15:12.860 "abort": true, 00:15:12.860 "seek_hole": false, 00:15:12.860 "seek_data": false, 00:15:12.860 "copy": true, 00:15:12.860 "nvme_iov_md": false 00:15:12.860 }, 00:15:12.860 "memory_domains": [ 00:15:12.860 { 00:15:12.860 "dma_device_id": "system", 00:15:12.860 "dma_device_type": 1 00:15:12.860 }, 00:15:12.860 { 00:15:12.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.860 "dma_device_type": 2 00:15:12.860 } 00:15:12.860 ], 00:15:12.860 "driver_specific": {} 00:15:12.860 }' 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.860 18:29:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.860 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.861 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.861 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.861 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:13.118 [2024-07-15 18:29:05.401681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.118 [2024-07-15 18:29:05.401709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.118 [2024-07-15 18:29:05.401725] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.118 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.376 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.376 "name": "Existed_Raid", 00:15:13.376 "uuid": "1a0dbd7c-42d8-11ef-9ade-d5fc5159efa5", 00:15:13.376 "strip_size_kb": 64, 
00:15:13.376 "state": "offline", 00:15:13.376 "raid_level": "concat", 00:15:13.376 "superblock": true, 00:15:13.376 "num_base_bdevs": 4, 00:15:13.376 "num_base_bdevs_discovered": 3, 00:15:13.376 "num_base_bdevs_operational": 3, 00:15:13.376 "base_bdevs_list": [ 00:15:13.376 { 00:15:13.376 "name": null, 00:15:13.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.376 "is_configured": false, 00:15:13.376 "data_offset": 2048, 00:15:13.376 "data_size": 63488 00:15:13.376 }, 00:15:13.376 { 00:15:13.376 "name": "BaseBdev2", 00:15:13.376 "uuid": "1a8e88d6-42d8-11ef-9ade-d5fc5159efa5", 00:15:13.376 "is_configured": true, 00:15:13.376 "data_offset": 2048, 00:15:13.376 "data_size": 63488 00:15:13.376 }, 00:15:13.376 { 00:15:13.376 "name": "BaseBdev3", 00:15:13.376 "uuid": "1b65656b-42d8-11ef-9ade-d5fc5159efa5", 00:15:13.376 "is_configured": true, 00:15:13.376 "data_offset": 2048, 00:15:13.376 "data_size": 63488 00:15:13.376 }, 00:15:13.376 { 00:15:13.376 "name": "BaseBdev4", 00:15:13.376 "uuid": "1c36c291-42d8-11ef-9ade-d5fc5159efa5", 00:15:13.376 "is_configured": true, 00:15:13.376 "data_offset": 2048, 00:15:13.376 "data_size": 63488 00:15:13.376 } 00:15:13.376 ] 00:15:13.376 }' 00:15:13.376 18:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.376 18:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.942 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:14.199 [2024-07-15 18:29:06.583625] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.456 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:14.456 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:14.456 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.456 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:14.714 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:14.714 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:14.714 18:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:14.970 [2024-07-15 18:29:07.120085] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:14.970 18:29:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:14.970 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:14.970 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:14.970 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:15.277 [2024-07-15 18:29:07.608312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:15.277 [2024-07-15 18:29:07.608360] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x40c8234a00 name Existed_Raid, state offline 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.277 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:15.533 18:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.790 BaseBdev2 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.790 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.047 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.303 [ 
00:15:16.303 { 00:15:16.303 "name": "BaseBdev2", 00:15:16.303 "aliases": [ 00:15:16.303 "1f891a02-42d8-11ef-9ade-d5fc5159efa5" 00:15:16.303 ], 00:15:16.303 "product_name": "Malloc disk", 00:15:16.303 "block_size": 512, 00:15:16.303 "num_blocks": 65536, 00:15:16.303 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:16.303 "assigned_rate_limits": { 00:15:16.303 "rw_ios_per_sec": 0, 00:15:16.303 "rw_mbytes_per_sec": 0, 00:15:16.303 "r_mbytes_per_sec": 0, 00:15:16.303 "w_mbytes_per_sec": 0 00:15:16.303 }, 00:15:16.303 "claimed": false, 00:15:16.303 "zoned": false, 00:15:16.303 "supported_io_types": { 00:15:16.303 "read": true, 00:15:16.303 "write": true, 00:15:16.303 "unmap": true, 00:15:16.303 "flush": true, 00:15:16.303 "reset": true, 00:15:16.303 "nvme_admin": false, 00:15:16.303 "nvme_io": false, 00:15:16.303 "nvme_io_md": false, 00:15:16.303 "write_zeroes": true, 00:15:16.303 "zcopy": true, 00:15:16.303 "get_zone_info": false, 00:15:16.303 "zone_management": false, 00:15:16.303 "zone_append": false, 00:15:16.303 "compare": false, 00:15:16.303 "compare_and_write": false, 00:15:16.303 "abort": true, 00:15:16.303 "seek_hole": false, 00:15:16.303 "seek_data": false, 00:15:16.303 "copy": true, 00:15:16.303 "nvme_iov_md": false 00:15:16.303 }, 00:15:16.303 "memory_domains": [ 00:15:16.303 { 00:15:16.303 "dma_device_id": "system", 00:15:16.303 "dma_device_type": 1 00:15:16.303 }, 00:15:16.303 { 00:15:16.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.303 "dma_device_type": 2 00:15:16.303 } 00:15:16.303 ], 00:15:16.303 "driver_specific": {} 00:15:16.303 } 00:15:16.303 ] 00:15:16.303 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:16.303 18:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:16.303 18:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:16.303 18:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.561 BaseBdev3 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.561 18:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.818 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:17.074 [ 00:15:17.074 { 00:15:17.074 "name": "BaseBdev3", 00:15:17.074 "aliases": [ 00:15:17.074 "1ff83149-42d8-11ef-9ade-d5fc5159efa5" 00:15:17.074 ], 00:15:17.074 "product_name": "Malloc disk", 00:15:17.074 "block_size": 512, 00:15:17.074 "num_blocks": 65536, 00:15:17.074 "uuid": 
"1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:17.074 "assigned_rate_limits": { 00:15:17.074 "rw_ios_per_sec": 0, 00:15:17.074 "rw_mbytes_per_sec": 0, 00:15:17.074 "r_mbytes_per_sec": 0, 00:15:17.074 "w_mbytes_per_sec": 0 00:15:17.074 }, 00:15:17.074 "claimed": false, 00:15:17.074 "zoned": false, 00:15:17.074 "supported_io_types": { 00:15:17.074 "read": true, 00:15:17.074 "write": true, 00:15:17.074 "unmap": true, 00:15:17.074 "flush": true, 00:15:17.074 "reset": true, 00:15:17.074 "nvme_admin": false, 00:15:17.074 "nvme_io": false, 00:15:17.074 "nvme_io_md": false, 00:15:17.074 "write_zeroes": true, 00:15:17.074 "zcopy": true, 00:15:17.074 "get_zone_info": false, 00:15:17.074 "zone_management": false, 00:15:17.074 "zone_append": false, 00:15:17.074 "compare": false, 00:15:17.074 "compare_and_write": false, 00:15:17.074 "abort": true, 00:15:17.074 "seek_hole": false, 00:15:17.074 "seek_data": false, 00:15:17.074 "copy": true, 00:15:17.074 "nvme_iov_md": false 00:15:17.075 }, 00:15:17.075 "memory_domains": [ 00:15:17.075 { 00:15:17.075 "dma_device_id": "system", 00:15:17.075 "dma_device_type": 1 00:15:17.075 }, 00:15:17.075 { 00:15:17.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.075 "dma_device_type": 2 00:15:17.075 } 00:15:17.075 ], 00:15:17.075 "driver_specific": {} 00:15:17.075 } 00:15:17.075 ] 00:15:17.075 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.075 18:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:17.075 18:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.075 18:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.332 BaseBdev4 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.332 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.589 18:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:17.847 [ 00:15:17.847 { 00:15:17.847 "name": "BaseBdev4", 00:15:17.847 "aliases": [ 00:15:17.847 "206cc743-42d8-11ef-9ade-d5fc5159efa5" 00:15:17.847 ], 00:15:17.847 "product_name": "Malloc disk", 00:15:17.847 "block_size": 512, 00:15:17.847 "num_blocks": 65536, 00:15:17.847 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:17.847 "assigned_rate_limits": { 00:15:17.847 "rw_ios_per_sec": 0, 00:15:17.847 "rw_mbytes_per_sec": 0, 00:15:17.847 "r_mbytes_per_sec": 0, 00:15:17.847 "w_mbytes_per_sec": 0 00:15:17.847 }, 00:15:17.847 "claimed": false, 00:15:17.847 "zoned": false, 00:15:17.847 
"supported_io_types": { 00:15:17.847 "read": true, 00:15:17.847 "write": true, 00:15:17.847 "unmap": true, 00:15:17.847 "flush": true, 00:15:17.847 "reset": true, 00:15:17.847 "nvme_admin": false, 00:15:17.847 "nvme_io": false, 00:15:17.847 "nvme_io_md": false, 00:15:17.847 "write_zeroes": true, 00:15:17.847 "zcopy": true, 00:15:17.847 "get_zone_info": false, 00:15:17.847 "zone_management": false, 00:15:17.847 "zone_append": false, 00:15:17.847 "compare": false, 00:15:17.847 "compare_and_write": false, 00:15:17.847 "abort": true, 00:15:17.847 "seek_hole": false, 00:15:17.847 "seek_data": false, 00:15:17.847 "copy": true, 00:15:17.847 "nvme_iov_md": false 00:15:17.847 }, 00:15:17.847 "memory_domains": [ 00:15:17.847 { 00:15:17.847 "dma_device_id": "system", 00:15:17.847 "dma_device_type": 1 00:15:17.847 }, 00:15:17.847 { 00:15:17.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.847 "dma_device_type": 2 00:15:17.847 } 00:15:17.847 ], 00:15:17.847 "driver_specific": {} 00:15:17.847 } 00:15:17.847 ] 00:15:17.847 18:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.847 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:17.847 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.847 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:18.105 [2024-07-15 18:29:10.366419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.105 [2024-07-15 18:29:10.366476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.105 [2024-07-15 18:29:10.366486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.105 [2024-07-15 18:29:10.367047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.105 [2024-07-15 18:29:10.367058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.105 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.362 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.362 "name": "Existed_Raid", 00:15:18.362 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:18.362 "strip_size_kb": 64, 00:15:18.362 "state": "configuring", 00:15:18.362 "raid_level": "concat", 00:15:18.362 "superblock": true, 00:15:18.362 "num_base_bdevs": 4, 00:15:18.362 "num_base_bdevs_discovered": 3, 00:15:18.362 "num_base_bdevs_operational": 4, 00:15:18.362 "base_bdevs_list": [ 00:15:18.362 { 00:15:18.362 "name": "BaseBdev1", 00:15:18.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.362 "is_configured": false, 00:15:18.362 "data_offset": 0, 00:15:18.362 "data_size": 0 00:15:18.362 }, 00:15:18.362 { 00:15:18.362 "name": "BaseBdev2", 00:15:18.362 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:18.362 "is_configured": true, 00:15:18.362 "data_offset": 2048, 00:15:18.362 "data_size": 63488 00:15:18.362 }, 00:15:18.362 { 00:15:18.362 "name": "BaseBdev3", 00:15:18.362 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:18.362 "is_configured": true, 00:15:18.362 "data_offset": 2048, 00:15:18.362 "data_size": 63488 00:15:18.362 }, 00:15:18.362 { 00:15:18.362 "name": "BaseBdev4", 00:15:18.362 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:18.362 "is_configured": true, 00:15:18.362 "data_offset": 2048, 00:15:18.362 "data_size": 63488 00:15:18.362 } 00:15:18.362 ] 00:15:18.362 }' 00:15:18.362 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.362 18:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.620 18:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:18.877 [2024-07-15 18:29:11.222467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:18.877 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.133 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.133 "name": "Existed_Raid", 00:15:19.133 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:19.133 "strip_size_kb": 64, 00:15:19.133 "state": "configuring", 00:15:19.133 "raid_level": "concat", 00:15:19.133 "superblock": true, 00:15:19.133 "num_base_bdevs": 4, 00:15:19.133 "num_base_bdevs_discovered": 2, 00:15:19.133 "num_base_bdevs_operational": 4, 00:15:19.133 "base_bdevs_list": [ 00:15:19.133 { 00:15:19.133 "name": "BaseBdev1", 00:15:19.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.133 "is_configured": false, 00:15:19.133 "data_offset": 0, 00:15:19.133 "data_size": 0 00:15:19.133 }, 00:15:19.133 { 00:15:19.133 "name": null, 00:15:19.133 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:19.133 "is_configured": false, 00:15:19.133 "data_offset": 2048, 00:15:19.133 "data_size": 63488 00:15:19.133 }, 00:15:19.133 { 00:15:19.133 "name": "BaseBdev3", 00:15:19.133 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:19.133 "is_configured": true, 00:15:19.133 "data_offset": 2048, 00:15:19.133 "data_size": 63488 00:15:19.133 }, 00:15:19.133 { 00:15:19.133 "name": "BaseBdev4", 00:15:19.133 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:19.133 "is_configured": true, 00:15:19.133 "data_offset": 2048, 00:15:19.133 "data_size": 63488 00:15:19.133 } 00:15:19.133 ] 00:15:19.133 }' 00:15:19.134 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.134 18:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.698 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.698 18:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.956 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:19.956 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.214 [2024-07-15 18:29:12.374716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.214 BaseBdev1 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.214 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.501 18:29:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.776 [ 00:15:20.776 { 00:15:20.776 "name": "BaseBdev1", 00:15:20.776 "aliases": [ 00:15:20.776 "2211f3d0-42d8-11ef-9ade-d5fc5159efa5" 00:15:20.776 ], 00:15:20.776 "product_name": "Malloc disk", 00:15:20.776 "block_size": 512, 00:15:20.776 "num_blocks": 65536, 00:15:20.776 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:20.776 "assigned_rate_limits": { 00:15:20.776 "rw_ios_per_sec": 0, 00:15:20.776 "rw_mbytes_per_sec": 0, 00:15:20.776 "r_mbytes_per_sec": 0, 00:15:20.776 "w_mbytes_per_sec": 0 00:15:20.776 }, 00:15:20.776 "claimed": true, 00:15:20.776 "claim_type": "exclusive_write", 00:15:20.776 "zoned": false, 00:15:20.776 "supported_io_types": { 00:15:20.776 "read": true, 00:15:20.776 "write": true, 00:15:20.776 "unmap": true, 00:15:20.776 "flush": true, 00:15:20.776 "reset": true, 00:15:20.776 "nvme_admin": false, 00:15:20.776 "nvme_io": false, 00:15:20.776 "nvme_io_md": false, 00:15:20.776 "write_zeroes": true, 00:15:20.776 "zcopy": true, 00:15:20.776 "get_zone_info": false, 00:15:20.776 "zone_management": false, 00:15:20.776 "zone_append": false, 00:15:20.776 "compare": false, 00:15:20.776 "compare_and_write": false, 00:15:20.776 "abort": true, 00:15:20.776 "seek_hole": false, 00:15:20.776 "seek_data": false, 00:15:20.776 "copy": true, 00:15:20.777 "nvme_iov_md": false 00:15:20.777 }, 00:15:20.777 "memory_domains": [ 00:15:20.777 { 00:15:20.777 "dma_device_id": "system", 00:15:20.777 "dma_device_type": 1 00:15:20.777 }, 00:15:20.777 { 00:15:20.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.777 "dma_device_type": 2 00:15:20.777 } 00:15:20.777 ], 00:15:20.777 "driver_specific": {} 00:15:20.777 } 00:15:20.777 ] 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.777 18:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.034 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.034 "name": 
"Existed_Raid", 00:15:21.034 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:21.034 "strip_size_kb": 64, 00:15:21.034 "state": "configuring", 00:15:21.034 "raid_level": "concat", 00:15:21.034 "superblock": true, 00:15:21.034 "num_base_bdevs": 4, 00:15:21.034 "num_base_bdevs_discovered": 3, 00:15:21.034 "num_base_bdevs_operational": 4, 00:15:21.034 "base_bdevs_list": [ 00:15:21.034 { 00:15:21.034 "name": "BaseBdev1", 00:15:21.034 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:21.034 "is_configured": true, 00:15:21.034 "data_offset": 2048, 00:15:21.034 "data_size": 63488 00:15:21.034 }, 00:15:21.034 { 00:15:21.034 "name": null, 00:15:21.034 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:21.034 "is_configured": false, 00:15:21.034 "data_offset": 2048, 00:15:21.034 "data_size": 63488 00:15:21.034 }, 00:15:21.034 { 00:15:21.035 "name": "BaseBdev3", 00:15:21.035 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:21.035 "is_configured": true, 00:15:21.035 "data_offset": 2048, 00:15:21.035 "data_size": 63488 00:15:21.035 }, 00:15:21.035 { 00:15:21.035 "name": "BaseBdev4", 00:15:21.035 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:21.035 "is_configured": true, 00:15:21.035 "data_offset": 2048, 00:15:21.035 "data_size": 63488 00:15:21.035 } 00:15:21.035 ] 00:15:21.035 }' 00:15:21.035 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.035 18:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.292 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.292 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:21.550 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:21.550 18:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:21.808 [2024-07-15 18:29:13.986686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.808 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.067 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.067 "name": "Existed_Raid", 00:15:22.067 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:22.067 "strip_size_kb": 64, 00:15:22.067 "state": "configuring", 00:15:22.067 "raid_level": "concat", 00:15:22.067 "superblock": true, 00:15:22.067 "num_base_bdevs": 4, 00:15:22.067 "num_base_bdevs_discovered": 2, 00:15:22.067 "num_base_bdevs_operational": 4, 00:15:22.067 "base_bdevs_list": [ 00:15:22.067 { 00:15:22.067 "name": "BaseBdev1", 00:15:22.067 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:22.067 "is_configured": true, 00:15:22.067 "data_offset": 2048, 00:15:22.067 "data_size": 63488 00:15:22.067 }, 00:15:22.067 { 00:15:22.067 "name": null, 00:15:22.067 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:22.067 "is_configured": false, 00:15:22.067 "data_offset": 2048, 00:15:22.067 "data_size": 63488 00:15:22.067 }, 00:15:22.067 { 00:15:22.067 "name": null, 00:15:22.067 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:22.067 "is_configured": false, 00:15:22.067 "data_offset": 2048, 00:15:22.067 "data_size": 63488 00:15:22.067 }, 00:15:22.067 { 00:15:22.067 "name": "BaseBdev4", 00:15:22.067 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:22.067 "is_configured": true, 00:15:22.067 "data_offset": 2048, 00:15:22.067 "data_size": 63488 00:15:22.067 } 00:15:22.067 ] 00:15:22.067 }' 00:15:22.067 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.067 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.325 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.325 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:22.583 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:22.583 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:22.842 [2024-07-15 18:29:15.174777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.842 18:29:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.842 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.100 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.100 "name": "Existed_Raid", 00:15:23.100 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:23.100 "strip_size_kb": 64, 00:15:23.100 "state": "configuring", 00:15:23.100 "raid_level": "concat", 00:15:23.100 "superblock": true, 00:15:23.100 "num_base_bdevs": 4, 00:15:23.100 "num_base_bdevs_discovered": 3, 00:15:23.100 "num_base_bdevs_operational": 4, 00:15:23.100 "base_bdevs_list": [ 00:15:23.100 { 00:15:23.100 "name": "BaseBdev1", 00:15:23.100 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:23.100 "is_configured": true, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 }, 00:15:23.100 { 00:15:23.100 "name": null, 00:15:23.100 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:23.100 "is_configured": false, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 }, 00:15:23.100 { 00:15:23.100 "name": "BaseBdev3", 00:15:23.100 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:23.100 "is_configured": true, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 }, 00:15:23.100 { 00:15:23.100 "name": "BaseBdev4", 00:15:23.100 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:23.100 "is_configured": true, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 } 00:15:23.100 ] 00:15:23.100 }' 00:15:23.100 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.100 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.666 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.666 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:23.666 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:23.666 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:23.923 [2024-07-15 18:29:16.242865] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:23.924 18:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.924 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.180 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.181 "name": "Existed_Raid", 00:15:24.181 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:24.181 "strip_size_kb": 64, 00:15:24.181 "state": "configuring", 00:15:24.181 "raid_level": "concat", 00:15:24.181 "superblock": true, 00:15:24.181 "num_base_bdevs": 4, 00:15:24.181 "num_base_bdevs_discovered": 2, 00:15:24.181 "num_base_bdevs_operational": 4, 00:15:24.181 "base_bdevs_list": [ 00:15:24.181 { 00:15:24.181 "name": null, 00:15:24.181 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:24.181 "is_configured": false, 00:15:24.181 "data_offset": 2048, 00:15:24.181 "data_size": 63488 00:15:24.181 }, 00:15:24.181 { 00:15:24.181 "name": null, 00:15:24.181 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:24.181 "is_configured": false, 00:15:24.181 "data_offset": 2048, 00:15:24.181 "data_size": 63488 00:15:24.181 }, 00:15:24.181 { 00:15:24.181 "name": "BaseBdev3", 00:15:24.181 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:24.181 "is_configured": true, 00:15:24.181 "data_offset": 2048, 00:15:24.181 "data_size": 63488 00:15:24.181 }, 00:15:24.181 { 00:15:24.181 "name": "BaseBdev4", 00:15:24.181 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:24.181 "is_configured": true, 00:15:24.181 "data_offset": 2048, 00:15:24.181 "data_size": 63488 00:15:24.181 } 00:15:24.181 ] 00:15:24.181 }' 00:15:24.181 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.181 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.745 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.745 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.745 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:24.745 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:25.002 [2024-07-15 18:29:17.328711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.002 
18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.002 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.264 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.264 "name": "Existed_Raid", 00:15:25.264 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:25.264 "strip_size_kb": 64, 00:15:25.264 "state": "configuring", 00:15:25.264 "raid_level": "concat", 00:15:25.264 "superblock": true, 00:15:25.264 "num_base_bdevs": 4, 00:15:25.264 "num_base_bdevs_discovered": 3, 00:15:25.264 "num_base_bdevs_operational": 4, 00:15:25.264 "base_bdevs_list": [ 00:15:25.264 { 00:15:25.264 "name": null, 00:15:25.264 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:25.264 "is_configured": false, 00:15:25.264 "data_offset": 2048, 00:15:25.264 "data_size": 63488 00:15:25.264 }, 00:15:25.264 { 00:15:25.264 "name": "BaseBdev2", 00:15:25.264 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:25.264 "is_configured": true, 00:15:25.264 "data_offset": 2048, 00:15:25.264 "data_size": 63488 00:15:25.264 }, 00:15:25.264 { 00:15:25.264 "name": "BaseBdev3", 00:15:25.264 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:25.264 "is_configured": true, 00:15:25.264 "data_offset": 2048, 00:15:25.264 "data_size": 63488 00:15:25.264 }, 00:15:25.264 { 00:15:25.264 "name": "BaseBdev4", 00:15:25.264 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:25.264 "is_configured": true, 00:15:25.264 "data_offset": 2048, 00:15:25.264 "data_size": 63488 00:15:25.264 } 00:15:25.264 ] 00:15:25.264 }' 00:15:25.264 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.264 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.835 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.835 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.835 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:25.835 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.835 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:26.092 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2211f3d0-42d8-11ef-9ade-d5fc5159efa5 00:15:26.349 [2024-07-15 18:29:18.712953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:26.349 [2024-07-15 18:29:18.713016] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x40c8234f00 00:15:26.349 [2024-07-15 18:29:18.713022] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:26.349 [2024-07-15 18:29:18.713044] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x40c8297e20 00:15:26.349 [2024-07-15 18:29:18.713093] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x40c8234f00 00:15:26.349 [2024-07-15 18:29:18.713097] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x40c8234f00 00:15:26.349 [2024-07-15 18:29:18.713118] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.349 NewBaseBdev 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:26.349 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.606 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:26.863 [ 00:15:26.863 { 00:15:26.863 "name": "NewBaseBdev", 00:15:26.863 "aliases": [ 00:15:26.863 "2211f3d0-42d8-11ef-9ade-d5fc5159efa5" 00:15:26.863 ], 00:15:26.863 "product_name": "Malloc disk", 00:15:26.863 "block_size": 512, 00:15:26.863 "num_blocks": 65536, 00:15:26.863 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:26.863 "assigned_rate_limits": { 00:15:26.863 "rw_ios_per_sec": 0, 00:15:26.863 "rw_mbytes_per_sec": 0, 00:15:26.863 "r_mbytes_per_sec": 0, 00:15:26.863 "w_mbytes_per_sec": 0 00:15:26.863 }, 00:15:26.863 "claimed": true, 00:15:26.863 "claim_type": "exclusive_write", 00:15:26.863 "zoned": false, 00:15:26.863 "supported_io_types": { 00:15:26.863 "read": true, 00:15:26.863 "write": true, 00:15:26.863 "unmap": true, 00:15:26.863 "flush": true, 00:15:26.863 "reset": true, 00:15:26.863 "nvme_admin": false, 00:15:26.863 "nvme_io": false, 00:15:26.863 "nvme_io_md": false, 00:15:26.863 "write_zeroes": true, 00:15:26.863 "zcopy": true, 00:15:26.863 "get_zone_info": false, 00:15:26.863 "zone_management": false, 00:15:26.863 "zone_append": 
false, 00:15:26.863 "compare": false, 00:15:26.863 "compare_and_write": false, 00:15:26.863 "abort": true, 00:15:26.863 "seek_hole": false, 00:15:26.863 "seek_data": false, 00:15:26.863 "copy": true, 00:15:26.863 "nvme_iov_md": false 00:15:26.863 }, 00:15:26.863 "memory_domains": [ 00:15:26.863 { 00:15:26.863 "dma_device_id": "system", 00:15:26.863 "dma_device_type": 1 00:15:26.863 }, 00:15:26.863 { 00:15:26.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.863 "dma_device_type": 2 00:15:26.863 } 00:15:26.863 ], 00:15:26.863 "driver_specific": {} 00:15:26.863 } 00:15:26.863 ] 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.863 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.121 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.121 "name": "Existed_Raid", 00:15:27.121 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.121 "strip_size_kb": 64, 00:15:27.121 "state": "online", 00:15:27.121 "raid_level": "concat", 00:15:27.121 "superblock": true, 00:15:27.121 "num_base_bdevs": 4, 00:15:27.121 "num_base_bdevs_discovered": 4, 00:15:27.121 "num_base_bdevs_operational": 4, 00:15:27.121 "base_bdevs_list": [ 00:15:27.121 { 00:15:27.121 "name": "NewBaseBdev", 00:15:27.121 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.121 "is_configured": true, 00:15:27.121 "data_offset": 2048, 00:15:27.121 "data_size": 63488 00:15:27.121 }, 00:15:27.121 { 00:15:27.121 "name": "BaseBdev2", 00:15:27.121 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.121 "is_configured": true, 00:15:27.121 "data_offset": 2048, 00:15:27.121 "data_size": 63488 00:15:27.121 }, 00:15:27.121 { 00:15:27.121 "name": "BaseBdev3", 00:15:27.121 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.121 "is_configured": true, 00:15:27.121 "data_offset": 2048, 00:15:27.121 "data_size": 63488 00:15:27.121 }, 00:15:27.121 { 00:15:27.121 "name": "BaseBdev4", 00:15:27.121 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.121 "is_configured": true, 00:15:27.121 
"data_offset": 2048, 00:15:27.121 "data_size": 63488 00:15:27.121 } 00:15:27.121 ] 00:15:27.121 }' 00:15:27.121 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.121 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:27.380 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:27.638 [2024-07-15 18:29:19.968947] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.638 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:27.638 "name": "Existed_Raid", 00:15:27.638 "aliases": [ 00:15:27.638 "20df87f1-42d8-11ef-9ade-d5fc5159efa5" 00:15:27.638 ], 00:15:27.638 "product_name": "Raid Volume", 00:15:27.638 "block_size": 512, 00:15:27.638 "num_blocks": 253952, 00:15:27.638 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "assigned_rate_limits": { 00:15:27.638 "rw_ios_per_sec": 0, 00:15:27.638 "rw_mbytes_per_sec": 0, 00:15:27.638 "r_mbytes_per_sec": 0, 00:15:27.638 "w_mbytes_per_sec": 0 00:15:27.638 }, 00:15:27.638 "claimed": false, 00:15:27.638 "zoned": false, 00:15:27.638 "supported_io_types": { 00:15:27.638 "read": true, 00:15:27.638 "write": true, 00:15:27.638 "unmap": true, 00:15:27.638 "flush": true, 00:15:27.638 "reset": true, 00:15:27.638 "nvme_admin": false, 00:15:27.638 "nvme_io": false, 00:15:27.638 "nvme_io_md": false, 00:15:27.638 "write_zeroes": true, 00:15:27.638 "zcopy": false, 00:15:27.638 "get_zone_info": false, 00:15:27.638 "zone_management": false, 00:15:27.638 "zone_append": false, 00:15:27.638 "compare": false, 00:15:27.638 "compare_and_write": false, 00:15:27.638 "abort": false, 00:15:27.638 "seek_hole": false, 00:15:27.638 "seek_data": false, 00:15:27.638 "copy": false, 00:15:27.638 "nvme_iov_md": false 00:15:27.638 }, 00:15:27.638 "memory_domains": [ 00:15:27.638 { 00:15:27.638 "dma_device_id": "system", 00:15:27.638 "dma_device_type": 1 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.638 "dma_device_type": 2 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "system", 00:15:27.638 "dma_device_type": 1 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.638 "dma_device_type": 2 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "system", 00:15:27.638 "dma_device_type": 1 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.638 "dma_device_type": 2 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "system", 00:15:27.638 
"dma_device_type": 1 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.638 "dma_device_type": 2 00:15:27.638 } 00:15:27.638 ], 00:15:27.638 "driver_specific": { 00:15:27.638 "raid": { 00:15:27.638 "uuid": "20df87f1-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "strip_size_kb": 64, 00:15:27.638 "state": "online", 00:15:27.638 "raid_level": "concat", 00:15:27.638 "superblock": true, 00:15:27.638 "num_base_bdevs": 4, 00:15:27.638 "num_base_bdevs_discovered": 4, 00:15:27.638 "num_base_bdevs_operational": 4, 00:15:27.638 "base_bdevs_list": [ 00:15:27.638 { 00:15:27.638 "name": "NewBaseBdev", 00:15:27.638 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "is_configured": true, 00:15:27.638 "data_offset": 2048, 00:15:27.638 "data_size": 63488 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "name": "BaseBdev2", 00:15:27.638 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "is_configured": true, 00:15:27.638 "data_offset": 2048, 00:15:27.638 "data_size": 63488 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "name": "BaseBdev3", 00:15:27.638 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "is_configured": true, 00:15:27.638 "data_offset": 2048, 00:15:27.638 "data_size": 63488 00:15:27.638 }, 00:15:27.638 { 00:15:27.638 "name": "BaseBdev4", 00:15:27.638 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.638 "is_configured": true, 00:15:27.638 "data_offset": 2048, 00:15:27.638 "data_size": 63488 00:15:27.638 } 00:15:27.638 ] 00:15:27.638 } 00:15:27.638 } 00:15:27.638 }' 00:15:27.638 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.638 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:27.638 BaseBdev2 00:15:27.638 BaseBdev3 00:15:27.638 BaseBdev4' 00:15:27.638 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.639 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:27.639 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.897 "name": "NewBaseBdev", 00:15:27.897 "aliases": [ 00:15:27.897 "2211f3d0-42d8-11ef-9ade-d5fc5159efa5" 00:15:27.897 ], 00:15:27.897 "product_name": "Malloc disk", 00:15:27.897 "block_size": 512, 00:15:27.897 "num_blocks": 65536, 00:15:27.897 "uuid": "2211f3d0-42d8-11ef-9ade-d5fc5159efa5", 00:15:27.897 "assigned_rate_limits": { 00:15:27.897 "rw_ios_per_sec": 0, 00:15:27.897 "rw_mbytes_per_sec": 0, 00:15:27.897 "r_mbytes_per_sec": 0, 00:15:27.897 "w_mbytes_per_sec": 0 00:15:27.897 }, 00:15:27.897 "claimed": true, 00:15:27.897 "claim_type": "exclusive_write", 00:15:27.897 "zoned": false, 00:15:27.897 "supported_io_types": { 00:15:27.897 "read": true, 00:15:27.897 "write": true, 00:15:27.897 "unmap": true, 00:15:27.897 "flush": true, 00:15:27.897 "reset": true, 00:15:27.897 "nvme_admin": false, 00:15:27.897 "nvme_io": false, 00:15:27.897 "nvme_io_md": false, 00:15:27.897 "write_zeroes": true, 00:15:27.897 "zcopy": true, 00:15:27.897 "get_zone_info": false, 00:15:27.897 "zone_management": false, 00:15:27.897 "zone_append": false, 00:15:27.897 "compare": false, 00:15:27.897 
"compare_and_write": false, 00:15:27.897 "abort": true, 00:15:27.897 "seek_hole": false, 00:15:27.897 "seek_data": false, 00:15:27.897 "copy": true, 00:15:27.897 "nvme_iov_md": false 00:15:27.897 }, 00:15:27.897 "memory_domains": [ 00:15:27.897 { 00:15:27.897 "dma_device_id": "system", 00:15:27.897 "dma_device_type": 1 00:15:27.897 }, 00:15:27.897 { 00:15:27.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.897 "dma_device_type": 2 00:15:27.897 } 00:15:27.897 ], 00:15:27.897 "driver_specific": {} 00:15:27.897 }' 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.897 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.155 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.155 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.155 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.155 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:28.155 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.413 "name": "BaseBdev2", 00:15:28.413 "aliases": [ 00:15:28.413 "1f891a02-42d8-11ef-9ade-d5fc5159efa5" 00:15:28.413 ], 00:15:28.413 "product_name": "Malloc disk", 00:15:28.413 "block_size": 512, 00:15:28.413 "num_blocks": 65536, 00:15:28.413 "uuid": "1f891a02-42d8-11ef-9ade-d5fc5159efa5", 00:15:28.413 "assigned_rate_limits": { 00:15:28.413 "rw_ios_per_sec": 0, 00:15:28.413 "rw_mbytes_per_sec": 0, 00:15:28.413 "r_mbytes_per_sec": 0, 00:15:28.413 "w_mbytes_per_sec": 0 00:15:28.413 }, 00:15:28.413 "claimed": true, 00:15:28.413 "claim_type": "exclusive_write", 00:15:28.413 "zoned": false, 00:15:28.413 "supported_io_types": { 00:15:28.413 "read": true, 00:15:28.413 "write": true, 00:15:28.413 "unmap": true, 00:15:28.413 "flush": true, 00:15:28.413 "reset": true, 00:15:28.413 "nvme_admin": false, 00:15:28.413 "nvme_io": false, 00:15:28.413 "nvme_io_md": false, 00:15:28.413 "write_zeroes": true, 00:15:28.413 "zcopy": true, 00:15:28.413 "get_zone_info": false, 00:15:28.413 "zone_management": false, 00:15:28.413 "zone_append": false, 00:15:28.413 "compare": false, 00:15:28.413 "compare_and_write": false, 00:15:28.413 "abort": true, 00:15:28.413 "seek_hole": false, 00:15:28.413 "seek_data": false, 00:15:28.413 "copy": true, 
00:15:28.413 "nvme_iov_md": false 00:15:28.413 }, 00:15:28.413 "memory_domains": [ 00:15:28.413 { 00:15:28.413 "dma_device_id": "system", 00:15:28.413 "dma_device_type": 1 00:15:28.413 }, 00:15:28.413 { 00:15:28.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.413 "dma_device_type": 2 00:15:28.413 } 00:15:28.413 ], 00:15:28.413 "driver_specific": {} 00:15:28.413 }' 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:28.413 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.671 "name": "BaseBdev3", 00:15:28.671 "aliases": [ 00:15:28.671 "1ff83149-42d8-11ef-9ade-d5fc5159efa5" 00:15:28.671 ], 00:15:28.671 "product_name": "Malloc disk", 00:15:28.671 "block_size": 512, 00:15:28.671 "num_blocks": 65536, 00:15:28.671 "uuid": "1ff83149-42d8-11ef-9ade-d5fc5159efa5", 00:15:28.671 "assigned_rate_limits": { 00:15:28.671 "rw_ios_per_sec": 0, 00:15:28.671 "rw_mbytes_per_sec": 0, 00:15:28.671 "r_mbytes_per_sec": 0, 00:15:28.671 "w_mbytes_per_sec": 0 00:15:28.671 }, 00:15:28.671 "claimed": true, 00:15:28.671 "claim_type": "exclusive_write", 00:15:28.671 "zoned": false, 00:15:28.671 "supported_io_types": { 00:15:28.671 "read": true, 00:15:28.671 "write": true, 00:15:28.671 "unmap": true, 00:15:28.671 "flush": true, 00:15:28.671 "reset": true, 00:15:28.671 "nvme_admin": false, 00:15:28.671 "nvme_io": false, 00:15:28.671 "nvme_io_md": false, 00:15:28.671 "write_zeroes": true, 00:15:28.671 "zcopy": true, 00:15:28.671 "get_zone_info": false, 00:15:28.671 "zone_management": false, 00:15:28.671 "zone_append": false, 00:15:28.671 "compare": false, 00:15:28.671 "compare_and_write": false, 00:15:28.671 "abort": true, 00:15:28.671 "seek_hole": false, 00:15:28.671 "seek_data": false, 00:15:28.671 "copy": true, 00:15:28.671 "nvme_iov_md": false 00:15:28.671 }, 00:15:28.671 "memory_domains": [ 00:15:28.671 { 00:15:28.671 "dma_device_id": "system", 00:15:28.671 
"dma_device_type": 1 00:15:28.671 }, 00:15:28.671 { 00:15:28.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.671 "dma_device_type": 2 00:15:28.671 } 00:15:28.671 ], 00:15:28.671 "driver_specific": {} 00:15:28.671 }' 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:28.671 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.930 "name": "BaseBdev4", 00:15:28.930 "aliases": [ 00:15:28.930 "206cc743-42d8-11ef-9ade-d5fc5159efa5" 00:15:28.930 ], 00:15:28.930 "product_name": "Malloc disk", 00:15:28.930 "block_size": 512, 00:15:28.930 "num_blocks": 65536, 00:15:28.930 "uuid": "206cc743-42d8-11ef-9ade-d5fc5159efa5", 00:15:28.930 "assigned_rate_limits": { 00:15:28.930 "rw_ios_per_sec": 0, 00:15:28.930 "rw_mbytes_per_sec": 0, 00:15:28.930 "r_mbytes_per_sec": 0, 00:15:28.930 "w_mbytes_per_sec": 0 00:15:28.930 }, 00:15:28.930 "claimed": true, 00:15:28.930 "claim_type": "exclusive_write", 00:15:28.930 "zoned": false, 00:15:28.930 "supported_io_types": { 00:15:28.930 "read": true, 00:15:28.930 "write": true, 00:15:28.930 "unmap": true, 00:15:28.930 "flush": true, 00:15:28.930 "reset": true, 00:15:28.930 "nvme_admin": false, 00:15:28.930 "nvme_io": false, 00:15:28.930 "nvme_io_md": false, 00:15:28.930 "write_zeroes": true, 00:15:28.930 "zcopy": true, 00:15:28.930 "get_zone_info": false, 00:15:28.930 "zone_management": false, 00:15:28.930 "zone_append": false, 00:15:28.930 "compare": false, 00:15:28.930 "compare_and_write": false, 00:15:28.930 "abort": true, 00:15:28.930 "seek_hole": false, 00:15:28.930 "seek_data": false, 00:15:28.930 "copy": true, 00:15:28.930 "nvme_iov_md": false 00:15:28.930 }, 00:15:28.930 "memory_domains": [ 00:15:28.930 { 00:15:28.930 "dma_device_id": "system", 00:15:28.930 "dma_device_type": 1 00:15:28.930 }, 00:15:28.930 { 00:15:28.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.930 "dma_device_type": 2 
00:15:28.930 } 00:15:28.930 ], 00:15:28.930 "driver_specific": {} 00:15:28.930 }' 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.930 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.214 [2024-07-15 18:29:21.537011] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.214 [2024-07-15 18:29:21.537039] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.214 [2024-07-15 18:29:21.537062] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.214 [2024-07-15 18:29:21.537079] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.215 [2024-07-15 18:29:21.537083] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x40c8234f00 name Existed_Raid, state offline 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61558 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61558 ']' 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61558 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61558 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61558' 00:15:29.215 killing process with pid 61558 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61558 00:15:29.215 
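The @200-@208 checks traced above (verify_raid_bdev_properties, run once per base bdev here and repeated later in raid_superblock_test) boil down to fetching the raid volume's JSON once, then comparing each configured base bdev's geometry against it. A minimal bash reconstruction follows; it is inferred from the xtrace rather than copied from bdev_raid.sh, so the exact variable names and helper structure are assumptions:

    # Reconstruction of the property-verification loop seen in the trace above.
    # $rpc_py mirrors the rpc.py invocation printed by the xtrace.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    verify_raid_bdev_properties() {
        local raid_bdev_name=$1 name raid_bdev_info base_bdev_info base_bdev_names
        raid_bdev_info=$($rpc_py bdev_get_bdevs -b "$raid_bdev_name" | jq '.[]')
        base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
            | select(.is_configured == true).name' <<< "$raid_bdev_info")
        for name in $base_bdev_names; do
            base_bdev_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
            # Each base bdev must match the raid volume's block size and
            # metadata layout; these are the [[ 512 == 512 ]] and
            # [[ null == null ]] assertions in the xtrace.
            [[ $(jq .block_size <<< "$base_bdev_info") == "$(jq .block_size <<< "$raid_bdev_info")" ]]
            [[ $(jq .md_size <<< "$base_bdev_info") == "$(jq .md_size <<< "$raid_bdev_info")" ]]
            [[ $(jq .md_interleave <<< "$base_bdev_info") == "$(jq .md_interleave <<< "$raid_bdev_info")" ]]
            [[ $(jq .dif_type <<< "$base_bdev_info") == "$(jq .dif_type <<< "$raid_bdev_info")" ]]
        done
    }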
[2024-07-15 18:29:21.567514] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.215 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61558 00:15:29.215 [2024-07-15 18:29:21.590943] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.474 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:29.474 00:15:29.474 real 0m27.167s 00:15:29.474 user 0m49.586s 00:15:29.474 sys 0m3.832s 00:15:29.474 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.474 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.474 ************************************ 00:15:29.474 END TEST raid_state_function_test_sb 00:15:29.474 ************************************ 00:15:29.474 18:29:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:29.474 18:29:21 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:29.474 18:29:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:29.474 18:29:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.474 18:29:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.474 ************************************ 00:15:29.474 START TEST raid_superblock_test 00:15:29.474 ************************************ 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62376 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62376 /var/tmp/spdk-raid.sock 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@829 -- # '[' -z 62376 ']' 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.474 18:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.732 [2024-07-15 18:29:21.869082] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:29.733 [2024-07-15 18:29:21.869281] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:30.311 EAL: TSC is not safe to use in SMP mode 00:15:30.311 EAL: TSC is not invariant 00:15:30.311 [2024-07-15 18:29:22.459617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.311 [2024-07-15 18:29:22.570720] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:30.311 [2024-07-15 18:29:22.572914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.311 [2024-07-15 18:29:22.573700] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.311 [2024-07-15 18:29:22.573718] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.897 18:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:31.182 malloc1 00:15:31.182 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.441 [2024-07-15 18:29:23.573987] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.441 [2024-07-15 18:29:23.574052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.441 [2024-07-15 18:29:23.574066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e34780 00:15:31.441 [2024-07-15 18:29:23.574074] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.441 [2024-07-15 18:29:23.575087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.441 [2024-07-15 18:29:23.575111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.441 pt1 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.441 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:31.441 malloc2 00:15:31.700 18:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.958 [2024-07-15 18:29:24.126026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.959 [2024-07-15 18:29:24.126087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.959 [2024-07-15 18:29:24.126100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e34c80 00:15:31.959 [2024-07-15 18:29:24.126109] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.959 [2024-07-15 18:29:24.126910] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.959 [2024-07-15 18:29:24.126930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.959 pt2 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_pt+=($bdev_pt) 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.959 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:32.217 malloc3 00:15:32.217 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:32.475 [2024-07-15 18:29:24.650062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:32.475 [2024-07-15 18:29:24.650120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.475 [2024-07-15 18:29:24.650132] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e35180 00:15:32.475 [2024-07-15 18:29:24.650141] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.475 [2024-07-15 18:29:24.650925] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.475 [2024-07-15 18:29:24.650950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:32.475 pt3 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:32.475 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:32.734 malloc4 00:15:32.734 18:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:32.992 [2024-07-15 18:29:25.142095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:32.992 [2024-07-15 18:29:25.142154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.992 [2024-07-15 18:29:25.142168] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e35680 00:15:32.992 [2024-07-15 18:29:25.142176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.992 [2024-07-15 18:29:25.142975] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.992 [2024-07-15 18:29:25.142999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:32.992 pt4 00:15:32.992 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:32.992 18:29:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:32.992 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:32.992 [2024-07-15 18:29:25.382121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.992 [2024-07-15 18:29:25.382798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.992 [2024-07-15 18:29:25.382821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:32.992 [2024-07-15 18:29:25.382834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:32.992 [2024-07-15 18:29:25.382904] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ece05e35900 00:15:32.992 [2024-07-15 18:29:25.382910] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:32.992 [2024-07-15 18:29:25.382953] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ece05e97e20 00:15:32.992 [2024-07-15 18:29:25.383039] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ece05e35900 00:15:32.992 [2024-07-15 18:29:25.383044] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3ece05e35900 00:15:32.992 [2024-07-15 18:29:25.383072] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.261 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.523 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.523 "name": "raid_bdev1", 00:15:33.523 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:33.523 "strip_size_kb": 64, 00:15:33.523 "state": "online", 00:15:33.523 "raid_level": "concat", 00:15:33.523 "superblock": true, 00:15:33.523 "num_base_bdevs": 4, 00:15:33.523 "num_base_bdevs_discovered": 4, 00:15:33.523 "num_base_bdevs_operational": 4, 00:15:33.523 "base_bdevs_list": [ 00:15:33.523 { 00:15:33.523 "name": "pt1", 00:15:33.523 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:15:33.523 "is_configured": true, 00:15:33.523 "data_offset": 2048, 00:15:33.523 "data_size": 63488 00:15:33.523 }, 00:15:33.523 { 00:15:33.523 "name": "pt2", 00:15:33.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.523 "is_configured": true, 00:15:33.523 "data_offset": 2048, 00:15:33.523 "data_size": 63488 00:15:33.523 }, 00:15:33.523 { 00:15:33.523 "name": "pt3", 00:15:33.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.523 "is_configured": true, 00:15:33.523 "data_offset": 2048, 00:15:33.523 "data_size": 63488 00:15:33.523 }, 00:15:33.523 { 00:15:33.523 "name": "pt4", 00:15:33.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:33.523 "is_configured": true, 00:15:33.523 "data_offset": 2048, 00:15:33.523 "data_size": 63488 00:15:33.523 } 00:15:33.523 ] 00:15:33.523 }' 00:15:33.523 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.523 18:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:33.782 18:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.040 [2024-07-15 18:29:26.186218] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.040 "name": "raid_bdev1", 00:15:34.040 "aliases": [ 00:15:34.040 "29d2beb1-42d8-11ef-9ade-d5fc5159efa5" 00:15:34.040 ], 00:15:34.040 "product_name": "Raid Volume", 00:15:34.040 "block_size": 512, 00:15:34.040 "num_blocks": 253952, 00:15:34.040 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:34.040 "assigned_rate_limits": { 00:15:34.040 "rw_ios_per_sec": 0, 00:15:34.040 "rw_mbytes_per_sec": 0, 00:15:34.040 "r_mbytes_per_sec": 0, 00:15:34.040 "w_mbytes_per_sec": 0 00:15:34.040 }, 00:15:34.040 "claimed": false, 00:15:34.040 "zoned": false, 00:15:34.040 "supported_io_types": { 00:15:34.040 "read": true, 00:15:34.040 "write": true, 00:15:34.040 "unmap": true, 00:15:34.040 "flush": true, 00:15:34.040 "reset": true, 00:15:34.040 "nvme_admin": false, 00:15:34.040 "nvme_io": false, 00:15:34.040 "nvme_io_md": false, 00:15:34.040 "write_zeroes": true, 00:15:34.040 "zcopy": false, 00:15:34.040 "get_zone_info": false, 00:15:34.040 "zone_management": false, 00:15:34.040 "zone_append": false, 00:15:34.040 "compare": false, 00:15:34.040 "compare_and_write": false, 00:15:34.040 "abort": false, 00:15:34.040 "seek_hole": false, 00:15:34.040 "seek_data": false, 00:15:34.040 "copy": false, 00:15:34.040 "nvme_iov_md": false 00:15:34.040 }, 00:15:34.040 "memory_domains": [ 00:15:34.040 { 00:15:34.040 "dma_device_id": "system", 00:15:34.040 
"dma_device_type": 1 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.040 "dma_device_type": 2 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "system", 00:15:34.040 "dma_device_type": 1 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.040 "dma_device_type": 2 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "system", 00:15:34.040 "dma_device_type": 1 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.040 "dma_device_type": 2 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "system", 00:15:34.040 "dma_device_type": 1 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.040 "dma_device_type": 2 00:15:34.040 } 00:15:34.040 ], 00:15:34.040 "driver_specific": { 00:15:34.040 "raid": { 00:15:34.040 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:34.040 "strip_size_kb": 64, 00:15:34.040 "state": "online", 00:15:34.040 "raid_level": "concat", 00:15:34.040 "superblock": true, 00:15:34.040 "num_base_bdevs": 4, 00:15:34.040 "num_base_bdevs_discovered": 4, 00:15:34.040 "num_base_bdevs_operational": 4, 00:15:34.040 "base_bdevs_list": [ 00:15:34.040 { 00:15:34.040 "name": "pt1", 00:15:34.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.040 "is_configured": true, 00:15:34.040 "data_offset": 2048, 00:15:34.040 "data_size": 63488 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "name": "pt2", 00:15:34.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.040 "is_configured": true, 00:15:34.040 "data_offset": 2048, 00:15:34.040 "data_size": 63488 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "name": "pt3", 00:15:34.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.040 "is_configured": true, 00:15:34.040 "data_offset": 2048, 00:15:34.040 "data_size": 63488 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "name": "pt4", 00:15:34.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:34.040 "is_configured": true, 00:15:34.040 "data_offset": 2048, 00:15:34.040 "data_size": 63488 00:15:34.040 } 00:15:34.040 ] 00:15:34.040 } 00:15:34.040 } 00:15:34.040 }' 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:34.040 pt2 00:15:34.040 pt3 00:15:34.040 pt4' 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:34.040 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.298 "name": "pt1", 00:15:34.298 "aliases": [ 00:15:34.298 "00000000-0000-0000-0000-000000000001" 00:15:34.298 ], 00:15:34.298 "product_name": "passthru", 00:15:34.298 "block_size": 512, 00:15:34.298 "num_blocks": 65536, 00:15:34.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.298 "assigned_rate_limits": { 00:15:34.298 "rw_ios_per_sec": 0, 00:15:34.298 "rw_mbytes_per_sec": 0, 00:15:34.298 "r_mbytes_per_sec": 0, 00:15:34.298 "w_mbytes_per_sec": 0 00:15:34.298 }, 00:15:34.298 "claimed": true, 00:15:34.298 
"claim_type": "exclusive_write", 00:15:34.298 "zoned": false, 00:15:34.298 "supported_io_types": { 00:15:34.298 "read": true, 00:15:34.298 "write": true, 00:15:34.298 "unmap": true, 00:15:34.298 "flush": true, 00:15:34.298 "reset": true, 00:15:34.298 "nvme_admin": false, 00:15:34.298 "nvme_io": false, 00:15:34.298 "nvme_io_md": false, 00:15:34.298 "write_zeroes": true, 00:15:34.298 "zcopy": true, 00:15:34.298 "get_zone_info": false, 00:15:34.298 "zone_management": false, 00:15:34.298 "zone_append": false, 00:15:34.298 "compare": false, 00:15:34.298 "compare_and_write": false, 00:15:34.298 "abort": true, 00:15:34.298 "seek_hole": false, 00:15:34.298 "seek_data": false, 00:15:34.298 "copy": true, 00:15:34.298 "nvme_iov_md": false 00:15:34.298 }, 00:15:34.298 "memory_domains": [ 00:15:34.298 { 00:15:34.298 "dma_device_id": "system", 00:15:34.298 "dma_device_type": 1 00:15:34.298 }, 00:15:34.298 { 00:15:34.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.298 "dma_device_type": 2 00:15:34.298 } 00:15:34.298 ], 00:15:34.298 "driver_specific": { 00:15:34.298 "passthru": { 00:15:34.298 "name": "pt1", 00:15:34.298 "base_bdev_name": "malloc1" 00:15:34.298 } 00:15:34.298 } 00:15:34.298 }' 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:34.298 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.566 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.567 "name": "pt2", 00:15:34.567 "aliases": [ 00:15:34.567 "00000000-0000-0000-0000-000000000002" 00:15:34.567 ], 00:15:34.567 "product_name": "passthru", 00:15:34.567 "block_size": 512, 00:15:34.567 "num_blocks": 65536, 00:15:34.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.567 "assigned_rate_limits": { 00:15:34.567 "rw_ios_per_sec": 0, 00:15:34.567 "rw_mbytes_per_sec": 0, 00:15:34.567 "r_mbytes_per_sec": 0, 00:15:34.567 "w_mbytes_per_sec": 0 00:15:34.567 }, 00:15:34.567 "claimed": true, 00:15:34.567 "claim_type": "exclusive_write", 00:15:34.567 "zoned": false, 00:15:34.567 "supported_io_types": { 00:15:34.567 "read": true, 00:15:34.567 "write": true, 
00:15:34.567 "unmap": true, 00:15:34.567 "flush": true, 00:15:34.567 "reset": true, 00:15:34.567 "nvme_admin": false, 00:15:34.567 "nvme_io": false, 00:15:34.567 "nvme_io_md": false, 00:15:34.567 "write_zeroes": true, 00:15:34.567 "zcopy": true, 00:15:34.567 "get_zone_info": false, 00:15:34.567 "zone_management": false, 00:15:34.567 "zone_append": false, 00:15:34.567 "compare": false, 00:15:34.567 "compare_and_write": false, 00:15:34.567 "abort": true, 00:15:34.567 "seek_hole": false, 00:15:34.567 "seek_data": false, 00:15:34.567 "copy": true, 00:15:34.567 "nvme_iov_md": false 00:15:34.567 }, 00:15:34.567 "memory_domains": [ 00:15:34.567 { 00:15:34.567 "dma_device_id": "system", 00:15:34.567 "dma_device_type": 1 00:15:34.567 }, 00:15:34.567 { 00:15:34.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.567 "dma_device_type": 2 00:15:34.567 } 00:15:34.567 ], 00:15:34.567 "driver_specific": { 00:15:34.567 "passthru": { 00:15:34.567 "name": "pt2", 00:15:34.567 "base_bdev_name": "malloc2" 00:15:34.567 } 00:15:34.567 } 00:15:34.567 }' 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:34.567 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.830 "name": "pt3", 00:15:34.830 "aliases": [ 00:15:34.830 "00000000-0000-0000-0000-000000000003" 00:15:34.830 ], 00:15:34.830 "product_name": "passthru", 00:15:34.830 "block_size": 512, 00:15:34.830 "num_blocks": 65536, 00:15:34.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.830 "assigned_rate_limits": { 00:15:34.830 "rw_ios_per_sec": 0, 00:15:34.830 "rw_mbytes_per_sec": 0, 00:15:34.830 "r_mbytes_per_sec": 0, 00:15:34.830 "w_mbytes_per_sec": 0 00:15:34.830 }, 00:15:34.830 "claimed": true, 00:15:34.830 "claim_type": "exclusive_write", 00:15:34.830 "zoned": false, 00:15:34.830 "supported_io_types": { 00:15:34.830 "read": true, 00:15:34.830 "write": true, 00:15:34.830 "unmap": true, 00:15:34.830 "flush": true, 00:15:34.830 "reset": true, 00:15:34.830 "nvme_admin": false, 00:15:34.830 "nvme_io": false, 
00:15:34.830 "nvme_io_md": false, 00:15:34.830 "write_zeroes": true, 00:15:34.830 "zcopy": true, 00:15:34.830 "get_zone_info": false, 00:15:34.830 "zone_management": false, 00:15:34.830 "zone_append": false, 00:15:34.830 "compare": false, 00:15:34.830 "compare_and_write": false, 00:15:34.830 "abort": true, 00:15:34.830 "seek_hole": false, 00:15:34.830 "seek_data": false, 00:15:34.830 "copy": true, 00:15:34.830 "nvme_iov_md": false 00:15:34.830 }, 00:15:34.830 "memory_domains": [ 00:15:34.830 { 00:15:34.830 "dma_device_id": "system", 00:15:34.830 "dma_device_type": 1 00:15:34.830 }, 00:15:34.830 { 00:15:34.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.830 "dma_device_type": 2 00:15:34.830 } 00:15:34.830 ], 00:15:34.830 "driver_specific": { 00:15:34.830 "passthru": { 00:15:34.830 "name": "pt3", 00:15:34.830 "base_bdev_name": "malloc3" 00:15:34.830 } 00:15:34.830 } 00:15:34.830 }' 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:34.830 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.087 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.087 "name": "pt4", 00:15:35.087 "aliases": [ 00:15:35.087 "00000000-0000-0000-0000-000000000004" 00:15:35.087 ], 00:15:35.087 "product_name": "passthru", 00:15:35.087 "block_size": 512, 00:15:35.087 "num_blocks": 65536, 00:15:35.087 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.087 "assigned_rate_limits": { 00:15:35.087 "rw_ios_per_sec": 0, 00:15:35.087 "rw_mbytes_per_sec": 0, 00:15:35.087 "r_mbytes_per_sec": 0, 00:15:35.087 "w_mbytes_per_sec": 0 00:15:35.087 }, 00:15:35.087 "claimed": true, 00:15:35.087 "claim_type": "exclusive_write", 00:15:35.087 "zoned": false, 00:15:35.087 "supported_io_types": { 00:15:35.087 "read": true, 00:15:35.087 "write": true, 00:15:35.087 "unmap": true, 00:15:35.087 "flush": true, 00:15:35.087 "reset": true, 00:15:35.087 "nvme_admin": false, 00:15:35.087 "nvme_io": false, 00:15:35.087 "nvme_io_md": false, 00:15:35.087 "write_zeroes": true, 00:15:35.087 "zcopy": true, 00:15:35.087 "get_zone_info": false, 00:15:35.087 
"zone_management": false, 00:15:35.087 "zone_append": false, 00:15:35.087 "compare": false, 00:15:35.087 "compare_and_write": false, 00:15:35.087 "abort": true, 00:15:35.087 "seek_hole": false, 00:15:35.087 "seek_data": false, 00:15:35.087 "copy": true, 00:15:35.087 "nvme_iov_md": false 00:15:35.087 }, 00:15:35.087 "memory_domains": [ 00:15:35.087 { 00:15:35.087 "dma_device_id": "system", 00:15:35.087 "dma_device_type": 1 00:15:35.087 }, 00:15:35.087 { 00:15:35.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.087 "dma_device_type": 2 00:15:35.087 } 00:15:35.087 ], 00:15:35.087 "driver_specific": { 00:15:35.087 "passthru": { 00:15:35.087 "name": "pt4", 00:15:35.087 "base_bdev_name": "malloc4" 00:15:35.087 } 00:15:35.087 } 00:15:35.087 }' 00:15:35.087 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.087 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.087 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.087 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:35.345 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:35.603 [2024-07-15 18:29:27.754332] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.603 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=29d2beb1-42d8-11ef-9ade-d5fc5159efa5 00:15:35.603 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 29d2beb1-42d8-11ef-9ade-d5fc5159efa5 ']' 00:15:35.603 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:35.860 [2024-07-15 18:29:28.086307] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.860 [2024-07-15 18:29:28.086330] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.860 [2024-07-15 18:29:28.086356] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.860 [2024-07-15 18:29:28.086375] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.860 [2024-07-15 18:29:28.086380] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ece05e35900 name raid_bdev1, state offline 00:15:35.860 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.860 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:36.117 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:36.117 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:36.117 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:36.117 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:36.434 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:36.434 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:36.691 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:36.691 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:36.949 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:36.949 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:37.206 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:37.206 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:37.464 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:37.722 [2024-07-15 18:29:30.026461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:37.722 [2024-07-15 18:29:30.027154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:37.722 [2024-07-15 18:29:30.027175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:37.722 [2024-07-15 18:29:30.027185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:37.722 [2024-07-15 18:29:30.027200] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:37.722 [2024-07-15 18:29:30.027236] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:37.722 [2024-07-15 18:29:30.027248] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:37.722 [2024-07-15 18:29:30.027257] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:37.722 [2024-07-15 18:29:30.027266] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.722 [2024-07-15 18:29:30.027270] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ece05e35680 name raid_bdev1, state configuring 00:15:37.722 request: 00:15:37.722 { 00:15:37.722 "name": "raid_bdev1", 00:15:37.722 "raid_level": "concat", 00:15:37.722 "base_bdevs": [ 00:15:37.722 "malloc1", 00:15:37.722 "malloc2", 00:15:37.722 "malloc3", 00:15:37.722 "malloc4" 00:15:37.722 ], 00:15:37.722 "strip_size_kb": 64, 00:15:37.722 "superblock": false, 00:15:37.722 "method": "bdev_raid_create", 00:15:37.722 "req_id": 1 00:15:37.722 } 00:15:37.722 Got JSON-RPC error response 00:15:37.722 response: 00:15:37.722 { 00:15:37.722 "code": -17, 00:15:37.722 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:37.722 } 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.722 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:37.981 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:37.981 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:37.981 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.238 [2024-07-15 18:29:30.550485] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.238 [2024-07-15 18:29:30.550561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.238 [2024-07-15 18:29:30.550574] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e35180 00:15:38.238 [2024-07-15 18:29:30.550583] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.238 [2024-07-15 18:29:30.551370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.238 [2024-07-15 18:29:30.551398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.238 [2024-07-15 18:29:30.551426] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:38.238 [2024-07-15 18:29:30.551438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.238 pt1 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.238 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.496 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.496 "name": "raid_bdev1", 00:15:38.496 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:38.496 "strip_size_kb": 64, 00:15:38.496 "state": "configuring", 00:15:38.496 "raid_level": "concat", 00:15:38.496 "superblock": true, 00:15:38.496 "num_base_bdevs": 4, 00:15:38.496 "num_base_bdevs_discovered": 1, 00:15:38.496 "num_base_bdevs_operational": 4, 00:15:38.496 "base_bdevs_list": [ 00:15:38.496 { 00:15:38.496 "name": "pt1", 00:15:38.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.496 "is_configured": true, 00:15:38.496 "data_offset": 2048, 00:15:38.496 "data_size": 63488 00:15:38.496 }, 00:15:38.496 { 00:15:38.496 "name": null, 00:15:38.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.496 "is_configured": false, 00:15:38.496 "data_offset": 2048, 00:15:38.496 "data_size": 63488 00:15:38.496 }, 00:15:38.496 { 00:15:38.496 "name": null, 00:15:38.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.496 "is_configured": false, 00:15:38.496 "data_offset": 2048, 00:15:38.496 "data_size": 63488 00:15:38.496 }, 00:15:38.496 { 00:15:38.496 "name": null, 
00:15:38.496 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.496 "is_configured": false, 00:15:38.496 "data_offset": 2048, 00:15:38.496 "data_size": 63488 00:15:38.496 } 00:15:38.496 ] 00:15:38.496 }' 00:15:38.496 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.496 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.062 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:39.062 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:39.062 [2024-07-15 18:29:31.374539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.062 [2024-07-15 18:29:31.374603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.062 [2024-07-15 18:29:31.374616] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e34780 00:15:39.062 [2024-07-15 18:29:31.374634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.062 [2024-07-15 18:29:31.374776] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.062 [2024-07-15 18:29:31.374787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.062 [2024-07-15 18:29:31.374812] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:39.062 [2024-07-15 18:29:31.374821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.062 pt2 00:15:39.062 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:39.321 [2024-07-15 18:29:31.614562] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.321 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.580 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.580 "name": 
"raid_bdev1", 00:15:39.580 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:39.580 "strip_size_kb": 64, 00:15:39.580 "state": "configuring", 00:15:39.580 "raid_level": "concat", 00:15:39.580 "superblock": true, 00:15:39.580 "num_base_bdevs": 4, 00:15:39.580 "num_base_bdevs_discovered": 1, 00:15:39.580 "num_base_bdevs_operational": 4, 00:15:39.580 "base_bdevs_list": [ 00:15:39.580 { 00:15:39.580 "name": "pt1", 00:15:39.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.580 "is_configured": true, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 }, 00:15:39.580 { 00:15:39.580 "name": null, 00:15:39.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.580 "is_configured": false, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 }, 00:15:39.580 { 00:15:39.580 "name": null, 00:15:39.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.580 "is_configured": false, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 }, 00:15:39.580 { 00:15:39.580 "name": null, 00:15:39.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.580 "is_configured": false, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 } 00:15:39.580 ] 00:15:39.580 }' 00:15:39.580 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.580 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.838 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:39.838 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:39.838 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.097 [2024-07-15 18:29:32.446620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.097 [2024-07-15 18:29:32.446683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.097 [2024-07-15 18:29:32.446695] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e34780 00:15:40.097 [2024-07-15 18:29:32.446704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.097 [2024-07-15 18:29:32.446835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.097 [2024-07-15 18:29:32.446854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.097 [2024-07-15 18:29:32.446880] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:40.097 [2024-07-15 18:29:32.446889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.097 pt2 00:15:40.097 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:40.097 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:40.097 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:40.356 [2024-07-15 18:29:32.678632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:40.356 [2024-07-15 18:29:32.678683] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:40.356 [2024-07-15 18:29:32.678695] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e35b80 00:15:40.356 [2024-07-15 18:29:32.678704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.356 [2024-07-15 18:29:32.678834] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.356 [2024-07-15 18:29:32.678852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:40.356 [2024-07-15 18:29:32.678877] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:40.356 [2024-07-15 18:29:32.678886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.356 pt3 00:15:40.356 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:40.356 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:40.356 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:40.625 [2024-07-15 18:29:32.938662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:40.625 [2024-07-15 18:29:32.938719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.625 [2024-07-15 18:29:32.938732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ece05e35900 00:15:40.625 [2024-07-15 18:29:32.938740] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.625 [2024-07-15 18:29:32.938870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.625 [2024-07-15 18:29:32.938881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:40.625 [2024-07-15 18:29:32.938906] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:40.625 [2024-07-15 18:29:32.938915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:40.625 [2024-07-15 18:29:32.938948] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ece05e34c80 00:15:40.625 [2024-07-15 18:29:32.938952] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:40.625 [2024-07-15 18:29:32.938974] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ece05e97e20 00:15:40.625 [2024-07-15 18:29:32.939028] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ece05e34c80 00:15:40.625 [2024-07-15 18:29:32.939034] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3ece05e34c80 00:15:40.625 [2024-07-15 18:29:32.939056] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.625 pt4 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.625 
18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.625 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.912 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.912 "name": "raid_bdev1", 00:15:40.912 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:40.912 "strip_size_kb": 64, 00:15:40.912 "state": "online", 00:15:40.912 "raid_level": "concat", 00:15:40.912 "superblock": true, 00:15:40.912 "num_base_bdevs": 4, 00:15:40.912 "num_base_bdevs_discovered": 4, 00:15:40.912 "num_base_bdevs_operational": 4, 00:15:40.912 "base_bdevs_list": [ 00:15:40.912 { 00:15:40.912 "name": "pt1", 00:15:40.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.912 "is_configured": true, 00:15:40.912 "data_offset": 2048, 00:15:40.912 "data_size": 63488 00:15:40.912 }, 00:15:40.912 { 00:15:40.912 "name": "pt2", 00:15:40.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.912 "is_configured": true, 00:15:40.912 "data_offset": 2048, 00:15:40.912 "data_size": 63488 00:15:40.912 }, 00:15:40.912 { 00:15:40.912 "name": "pt3", 00:15:40.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.912 "is_configured": true, 00:15:40.912 "data_offset": 2048, 00:15:40.912 "data_size": 63488 00:15:40.912 }, 00:15:40.912 { 00:15:40.912 "name": "pt4", 00:15:40.912 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.912 "is_configured": true, 00:15:40.912 "data_offset": 2048, 00:15:40.912 "data_size": 63488 00:15:40.912 } 00:15:40.912 ] 00:15:40.912 }' 00:15:40.912 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.912 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq 
'.[]' 00:15:41.478 [2024-07-15 18:29:33.810771] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:41.478 "name": "raid_bdev1", 00:15:41.478 "aliases": [ 00:15:41.478 "29d2beb1-42d8-11ef-9ade-d5fc5159efa5" 00:15:41.478 ], 00:15:41.478 "product_name": "Raid Volume", 00:15:41.478 "block_size": 512, 00:15:41.478 "num_blocks": 253952, 00:15:41.478 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:41.478 "assigned_rate_limits": { 00:15:41.478 "rw_ios_per_sec": 0, 00:15:41.478 "rw_mbytes_per_sec": 0, 00:15:41.478 "r_mbytes_per_sec": 0, 00:15:41.478 "w_mbytes_per_sec": 0 00:15:41.478 }, 00:15:41.478 "claimed": false, 00:15:41.478 "zoned": false, 00:15:41.478 "supported_io_types": { 00:15:41.478 "read": true, 00:15:41.478 "write": true, 00:15:41.478 "unmap": true, 00:15:41.478 "flush": true, 00:15:41.478 "reset": true, 00:15:41.478 "nvme_admin": false, 00:15:41.478 "nvme_io": false, 00:15:41.478 "nvme_io_md": false, 00:15:41.478 "write_zeroes": true, 00:15:41.478 "zcopy": false, 00:15:41.478 "get_zone_info": false, 00:15:41.478 "zone_management": false, 00:15:41.478 "zone_append": false, 00:15:41.478 "compare": false, 00:15:41.478 "compare_and_write": false, 00:15:41.478 "abort": false, 00:15:41.478 "seek_hole": false, 00:15:41.478 "seek_data": false, 00:15:41.478 "copy": false, 00:15:41.478 "nvme_iov_md": false 00:15:41.478 }, 00:15:41.478 "memory_domains": [ 00:15:41.478 { 00:15:41.478 "dma_device_id": "system", 00:15:41.478 "dma_device_type": 1 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.478 "dma_device_type": 2 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "system", 00:15:41.478 "dma_device_type": 1 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.478 "dma_device_type": 2 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "system", 00:15:41.478 "dma_device_type": 1 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.478 "dma_device_type": 2 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "system", 00:15:41.478 "dma_device_type": 1 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.478 "dma_device_type": 2 00:15:41.478 } 00:15:41.478 ], 00:15:41.478 "driver_specific": { 00:15:41.478 "raid": { 00:15:41.478 "uuid": "29d2beb1-42d8-11ef-9ade-d5fc5159efa5", 00:15:41.478 "strip_size_kb": 64, 00:15:41.478 "state": "online", 00:15:41.478 "raid_level": "concat", 00:15:41.478 "superblock": true, 00:15:41.478 "num_base_bdevs": 4, 00:15:41.478 "num_base_bdevs_discovered": 4, 00:15:41.478 "num_base_bdevs_operational": 4, 00:15:41.478 "base_bdevs_list": [ 00:15:41.478 { 00:15:41.478 "name": "pt1", 00:15:41.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.478 "is_configured": true, 00:15:41.478 "data_offset": 2048, 00:15:41.478 "data_size": 63488 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "name": "pt2", 00:15:41.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.478 "is_configured": true, 00:15:41.478 "data_offset": 2048, 00:15:41.478 "data_size": 63488 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "name": "pt3", 00:15:41.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.478 "is_configured": true, 00:15:41.478 "data_offset": 2048, 00:15:41.478 "data_size": 63488 00:15:41.478 }, 00:15:41.478 { 00:15:41.478 "name": "pt4", 
00:15:41.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.478 "is_configured": true, 00:15:41.478 "data_offset": 2048, 00:15:41.478 "data_size": 63488 00:15:41.478 } 00:15:41.478 ] 00:15:41.478 } 00:15:41.478 } 00:15:41.478 }' 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:41.478 pt2 00:15:41.478 pt3 00:15:41.478 pt4' 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:41.478 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.736 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.736 "name": "pt1", 00:15:41.736 "aliases": [ 00:15:41.736 "00000000-0000-0000-0000-000000000001" 00:15:41.736 ], 00:15:41.736 "product_name": "passthru", 00:15:41.736 "block_size": 512, 00:15:41.736 "num_blocks": 65536, 00:15:41.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.736 "assigned_rate_limits": { 00:15:41.736 "rw_ios_per_sec": 0, 00:15:41.736 "rw_mbytes_per_sec": 0, 00:15:41.736 "r_mbytes_per_sec": 0, 00:15:41.736 "w_mbytes_per_sec": 0 00:15:41.736 }, 00:15:41.736 "claimed": true, 00:15:41.736 "claim_type": "exclusive_write", 00:15:41.736 "zoned": false, 00:15:41.736 "supported_io_types": { 00:15:41.736 "read": true, 00:15:41.736 "write": true, 00:15:41.736 "unmap": true, 00:15:41.736 "flush": true, 00:15:41.737 "reset": true, 00:15:41.737 "nvme_admin": false, 00:15:41.737 "nvme_io": false, 00:15:41.737 "nvme_io_md": false, 00:15:41.737 "write_zeroes": true, 00:15:41.737 "zcopy": true, 00:15:41.737 "get_zone_info": false, 00:15:41.737 "zone_management": false, 00:15:41.737 "zone_append": false, 00:15:41.737 "compare": false, 00:15:41.737 "compare_and_write": false, 00:15:41.737 "abort": true, 00:15:41.737 "seek_hole": false, 00:15:41.737 "seek_data": false, 00:15:41.737 "copy": true, 00:15:41.737 "nvme_iov_md": false 00:15:41.737 }, 00:15:41.737 "memory_domains": [ 00:15:41.737 { 00:15:41.737 "dma_device_id": "system", 00:15:41.737 "dma_device_type": 1 00:15:41.737 }, 00:15:41.737 { 00:15:41.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.737 "dma_device_type": 2 00:15:41.737 } 00:15:41.737 ], 00:15:41.737 "driver_specific": { 00:15:41.737 "passthru": { 00:15:41.737 "name": "pt1", 00:15:41.737 "base_bdev_name": "malloc1" 00:15:41.737 } 00:15:41.737 } 00:15:41.737 }' 00:15:41.737 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.737 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.737 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.737 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:41.995 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.254 "name": "pt2", 00:15:42.254 "aliases": [ 00:15:42.254 "00000000-0000-0000-0000-000000000002" 00:15:42.254 ], 00:15:42.254 "product_name": "passthru", 00:15:42.254 "block_size": 512, 00:15:42.254 "num_blocks": 65536, 00:15:42.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.254 "assigned_rate_limits": { 00:15:42.254 "rw_ios_per_sec": 0, 00:15:42.254 "rw_mbytes_per_sec": 0, 00:15:42.254 "r_mbytes_per_sec": 0, 00:15:42.254 "w_mbytes_per_sec": 0 00:15:42.254 }, 00:15:42.254 "claimed": true, 00:15:42.254 "claim_type": "exclusive_write", 00:15:42.254 "zoned": false, 00:15:42.254 "supported_io_types": { 00:15:42.254 "read": true, 00:15:42.254 "write": true, 00:15:42.254 "unmap": true, 00:15:42.254 "flush": true, 00:15:42.254 "reset": true, 00:15:42.254 "nvme_admin": false, 00:15:42.254 "nvme_io": false, 00:15:42.254 "nvme_io_md": false, 00:15:42.254 "write_zeroes": true, 00:15:42.254 "zcopy": true, 00:15:42.254 "get_zone_info": false, 00:15:42.254 "zone_management": false, 00:15:42.254 "zone_append": false, 00:15:42.254 "compare": false, 00:15:42.254 "compare_and_write": false, 00:15:42.254 "abort": true, 00:15:42.254 "seek_hole": false, 00:15:42.254 "seek_data": false, 00:15:42.254 "copy": true, 00:15:42.254 "nvme_iov_md": false 00:15:42.254 }, 00:15:42.254 "memory_domains": [ 00:15:42.254 { 00:15:42.254 "dma_device_id": "system", 00:15:42.254 "dma_device_type": 1 00:15:42.254 }, 00:15:42.254 { 00:15:42.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.254 "dma_device_type": 2 00:15:42.254 } 00:15:42.254 ], 00:15:42.254 "driver_specific": { 00:15:42.254 "passthru": { 00:15:42.254 "name": "pt2", 00:15:42.254 "base_bdev_name": "malloc2" 00:15:42.254 } 00:15:42.254 } 00:15:42.254 }' 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.254 18:29:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:42.254 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.513 "name": "pt3", 00:15:42.513 "aliases": [ 00:15:42.513 "00000000-0000-0000-0000-000000000003" 00:15:42.513 ], 00:15:42.513 "product_name": "passthru", 00:15:42.513 "block_size": 512, 00:15:42.513 "num_blocks": 65536, 00:15:42.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.513 "assigned_rate_limits": { 00:15:42.513 "rw_ios_per_sec": 0, 00:15:42.513 "rw_mbytes_per_sec": 0, 00:15:42.513 "r_mbytes_per_sec": 0, 00:15:42.513 "w_mbytes_per_sec": 0 00:15:42.513 }, 00:15:42.513 "claimed": true, 00:15:42.513 "claim_type": "exclusive_write", 00:15:42.513 "zoned": false, 00:15:42.513 "supported_io_types": { 00:15:42.513 "read": true, 00:15:42.513 "write": true, 00:15:42.513 "unmap": true, 00:15:42.513 "flush": true, 00:15:42.513 "reset": true, 00:15:42.513 "nvme_admin": false, 00:15:42.513 "nvme_io": false, 00:15:42.513 "nvme_io_md": false, 00:15:42.513 "write_zeroes": true, 00:15:42.513 "zcopy": true, 00:15:42.513 "get_zone_info": false, 00:15:42.513 "zone_management": false, 00:15:42.513 "zone_append": false, 00:15:42.513 "compare": false, 00:15:42.513 "compare_and_write": false, 00:15:42.513 "abort": true, 00:15:42.513 "seek_hole": false, 00:15:42.513 "seek_data": false, 00:15:42.513 "copy": true, 00:15:42.513 "nvme_iov_md": false 00:15:42.513 }, 00:15:42.513 "memory_domains": [ 00:15:42.513 { 00:15:42.513 "dma_device_id": "system", 00:15:42.513 "dma_device_type": 1 00:15:42.513 }, 00:15:42.513 { 00:15:42.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.513 "dma_device_type": 2 00:15:42.513 } 00:15:42.513 ], 00:15:42.513 "driver_specific": { 00:15:42.513 "passthru": { 00:15:42.513 "name": "pt3", 00:15:42.513 "base_bdev_name": "malloc3" 00:15:42.513 } 00:15:42.513 } 00:15:42.513 }' 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:42.513 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.770 "name": "pt4", 00:15:42.770 "aliases": [ 00:15:42.770 "00000000-0000-0000-0000-000000000004" 00:15:42.770 ], 00:15:42.770 "product_name": "passthru", 00:15:42.770 "block_size": 512, 00:15:42.770 "num_blocks": 65536, 00:15:42.770 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.770 "assigned_rate_limits": { 00:15:42.770 "rw_ios_per_sec": 0, 00:15:42.770 "rw_mbytes_per_sec": 0, 00:15:42.770 "r_mbytes_per_sec": 0, 00:15:42.770 "w_mbytes_per_sec": 0 00:15:42.770 }, 00:15:42.770 "claimed": true, 00:15:42.770 "claim_type": "exclusive_write", 00:15:42.770 "zoned": false, 00:15:42.770 "supported_io_types": { 00:15:42.770 "read": true, 00:15:42.770 "write": true, 00:15:42.770 "unmap": true, 00:15:42.770 "flush": true, 00:15:42.770 "reset": true, 00:15:42.770 "nvme_admin": false, 00:15:42.770 "nvme_io": false, 00:15:42.770 "nvme_io_md": false, 00:15:42.770 "write_zeroes": true, 00:15:42.770 "zcopy": true, 00:15:42.770 "get_zone_info": false, 00:15:42.770 "zone_management": false, 00:15:42.770 "zone_append": false, 00:15:42.770 "compare": false, 00:15:42.770 "compare_and_write": false, 00:15:42.770 "abort": true, 00:15:42.770 "seek_hole": false, 00:15:42.770 "seek_data": false, 00:15:42.770 "copy": true, 00:15:42.770 "nvme_iov_md": false 00:15:42.770 }, 00:15:42.770 "memory_domains": [ 00:15:42.770 { 00:15:42.770 "dma_device_id": "system", 00:15:42.770 "dma_device_type": 1 00:15:42.770 }, 00:15:42.770 { 00:15:42.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.770 "dma_device_type": 2 00:15:42.770 } 00:15:42.770 ], 00:15:42.770 "driver_specific": { 00:15:42.770 "passthru": { 00:15:42.770 "name": "pt4", 00:15:42.770 "base_bdev_name": "malloc4" 00:15:42.770 } 00:15:42.770 } 00:15:42.770 }' 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.770 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.771 18:29:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.771 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:43.029 [2024-07-15 18:29:35.342896] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 29d2beb1-42d8-11ef-9ade-d5fc5159efa5 '!=' 29d2beb1-42d8-11ef-9ade-d5fc5159efa5 ']' 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62376 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62376 ']' 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62376 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62376 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:43.029 killing process with pid 62376 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62376' 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62376 00:15:43.029 [2024-07-15 18:29:35.377192] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.029 [2024-07-15 18:29:35.377221] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.029 [2024-07-15 18:29:35.377242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.029 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62376 00:15:43.029 [2024-07-15 18:29:35.377246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ece05e34c80 name raid_bdev1, state offline 00:15:43.029 [2024-07-15 18:29:35.406147] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.288 18:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:43.288 00:15:43.288 real 0m13.766s 00:15:43.288 user 0m24.453s 00:15:43.288 sys 0m2.218s 00:15:43.288 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.288 18:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.288 ************************************ 00:15:43.288 END TEST raid_superblock_test 00:15:43.288 ************************************ 00:15:43.288 18:29:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:43.288 18:29:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:15:43.288 18:29:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:43.288 18:29:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.288 18:29:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.288 ************************************ 00:15:43.288 START TEST raid_read_error_test 00:15:43.288 ************************************ 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.l8WxmzM8sq 
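[editor's note] The locals above configure a concat, 64 KiB-strip error test over four base bdevs with error_io_type=read; the harness then starts bdevperf in wait-for-RPC mode and drives it over a UNIX-domain RPC socket. The following is a minimal sketch of that launch, not part of the captured log: the binary path, socket path, and flags are copied from the invocation recorded just below, and waitforlisten is the autotest_common.sh helper the harness itself uses.

  # Hedged sketch of the bdevperf launch performed by raid_io_error_test.
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC_SOCK=/var/tmp/spdk-raid.sock
  # -z: wait for RPC configuration instead of starting I/O immediately;
  # -w randrw -M 50: mixed workload, 50% reads; -q 1: queue depth 1;
  # -o 128k: 128 KiB I/Os; -t 60: 60 s runtime; -T raid_bdev1: bdev under test;
  # -L bdev_raid: enable that debug log flag (remaining flags kept as recorded below).
  "$BDEVPERF" -r "$RPC_SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" "$RPC_SOCK"   # poll until the app answers on the socket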
00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62777 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62777 /var/tmp/spdk-raid.sock 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62777 ']' 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.288 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.547 [2024-07-15 18:29:35.682759] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:43.547 [2024-07-15 18:29:35.682972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:44.114 EAL: TSC is not safe to use in SMP mode 00:15:44.114 EAL: TSC is not invariant 00:15:44.114 [2024-07-15 18:29:36.273433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.114 [2024-07-15 18:29:36.382245] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
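[editor's note] Once the app is up, the log below shows the harness building one three-layer stack per base device: a malloc bdev, an error bdev wrapped around it (which SPDK registers with an EE_ prefix), and a passthru bdev on top, which is what the RAID volume claims; the error layer underneath is the injection point for the read failures this test exercises. A minimal sketch of that sequence, assuming the rpc.py path and socket seen throughout this run; $rpc is shorthand introduced here, and the bdev_error_inject_error step is an inference about the later test phase, which lies past the end of this excerpt.

  # Hedged sketch of the per-base-bdev stack the log below records.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for base in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b ${base}_malloc   # 32 MiB, 512 B blocks (65536 blocks)
      $rpc bdev_error_create ${base}_malloc              # registers EE_${base}_malloc
      $rpc bdev_passthru_create -b EE_${base}_malloc -p ${base}
  done
  # Assemble the concat volume with a superblock (-s), exactly as recorded below:
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
  # Presumed later step (not in this excerpt): inject read failures on one leg:
  #   $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure

The passthru layer gives the RAID a stable, claimable wrapper while the EE_ error bdev beneath it provides the failure-injection point, so errors can be toggled without rebuilding the array.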
00:15:44.114 [2024-07-15 18:29:36.384560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.114 [2024-07-15 18:29:36.385354] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.114 [2024-07-15 18:29:36.385371] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.372 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.372 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:44.372 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:44.372 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:44.631 BaseBdev1_malloc 00:15:44.631 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:45.198 true 00:15:45.198 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:45.198 [2024-07-15 18:29:37.533690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:45.198 [2024-07-15 18:29:37.533759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.198 [2024-07-15 18:29:37.533787] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fa50834780 00:15:45.198 [2024-07-15 18:29:37.533796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.198 [2024-07-15 18:29:37.534507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.198 [2024-07-15 18:29:37.534532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.198 BaseBdev1 00:15:45.198 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:45.198 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.460 BaseBdev2_malloc 00:15:45.460 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:45.728 true 00:15:45.728 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:46.293 [2024-07-15 18:29:38.413768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:46.293 [2024-07-15 18:29:38.413846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.293 [2024-07-15 18:29:38.413875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fa50834c80 00:15:46.293 [2024-07-15 18:29:38.413885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.293 [2024-07-15 18:29:38.414633] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.293 [2024-07-15 18:29:38.414661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:15:46.293 BaseBdev2 00:15:46.293 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:46.293 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:46.550 BaseBdev3_malloc 00:15:46.550 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:46.808 true 00:15:46.808 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:47.065 [2024-07-15 18:29:39.217831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:47.065 [2024-07-15 18:29:39.217905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.065 [2024-07-15 18:29:39.217934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fa50835180 00:15:47.065 [2024-07-15 18:29:39.217943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.065 [2024-07-15 18:29:39.218672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.065 [2024-07-15 18:29:39.218697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:47.065 BaseBdev3 00:15:47.065 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:47.065 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:47.323 BaseBdev4_malloc 00:15:47.323 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:47.580 true 00:15:47.580 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:47.836 [2024-07-15 18:29:40.017884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:47.836 [2024-07-15 18:29:40.017955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.836 [2024-07-15 18:29:40.017984] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fa50835680 00:15:47.836 [2024-07-15 18:29:40.017994] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.836 [2024-07-15 18:29:40.018708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.836 [2024-07-15 18:29:40.018734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:47.836 BaseBdev4 00:15:47.836 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:48.093 [2024-07-15 18:29:40.349925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.093 [2024-07-15 18:29:40.350574] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.093 [2024-07-15 18:29:40.350603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.093 [2024-07-15 18:29:40.350620] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.093 [2024-07-15 18:29:40.350692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24fa50835900 00:15:48.093 [2024-07-15 18:29:40.350698] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:48.093 [2024-07-15 18:29:40.350739] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24fa508a0e20 00:15:48.093 [2024-07-15 18:29:40.350817] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24fa50835900 00:15:48.093 [2024-07-15 18:29:40.350822] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24fa50835900 00:15:48.093 [2024-07-15 18:29:40.350851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.093 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.351 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.351 "name": "raid_bdev1", 00:15:48.351 "uuid": "32bea681-42d8-11ef-9ade-d5fc5159efa5", 00:15:48.351 "strip_size_kb": 64, 00:15:48.351 "state": "online", 00:15:48.351 "raid_level": "concat", 00:15:48.351 "superblock": true, 00:15:48.351 "num_base_bdevs": 4, 00:15:48.351 "num_base_bdevs_discovered": 4, 00:15:48.351 "num_base_bdevs_operational": 4, 00:15:48.351 "base_bdevs_list": [ 00:15:48.351 { 00:15:48.351 "name": "BaseBdev1", 00:15:48.351 "uuid": "82bc99ac-dfce-6d58-81a8-01ca91bfd36f", 00:15:48.351 "is_configured": true, 00:15:48.351 "data_offset": 2048, 00:15:48.351 "data_size": 63488 00:15:48.351 }, 00:15:48.351 { 00:15:48.351 "name": "BaseBdev2", 00:15:48.351 "uuid": "19c047ab-e860-0e58-8c23-83fb21795209", 00:15:48.351 "is_configured": true, 00:15:48.351 "data_offset": 2048, 00:15:48.351 "data_size": 63488 00:15:48.351 }, 00:15:48.351 { 00:15:48.351 "name": "BaseBdev3", 00:15:48.351 "uuid": 
"13d15a17-6f59-0d59-9db1-86e8e646f3ad", 00:15:48.351 "is_configured": true, 00:15:48.351 "data_offset": 2048, 00:15:48.351 "data_size": 63488 00:15:48.351 }, 00:15:48.351 { 00:15:48.351 "name": "BaseBdev4", 00:15:48.351 "uuid": "cfdce8b1-440c-c65a-be99-4d40458090ea", 00:15:48.351 "is_configured": true, 00:15:48.351 "data_offset": 2048, 00:15:48.351 "data_size": 63488 00:15:48.351 } 00:15:48.351 ] 00:15:48.351 }' 00:15:48.351 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.351 18:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.609 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:48.609 18:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:48.868 [2024-07-15 18:29:41.110193] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24fa508a0ec0 00:15:49.827 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.085 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.343 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.343 "name": "raid_bdev1", 00:15:50.343 "uuid": "32bea681-42d8-11ef-9ade-d5fc5159efa5", 00:15:50.343 "strip_size_kb": 64, 00:15:50.343 "state": "online", 00:15:50.343 "raid_level": "concat", 00:15:50.343 "superblock": true, 00:15:50.343 "num_base_bdevs": 4, 00:15:50.343 "num_base_bdevs_discovered": 4, 00:15:50.343 "num_base_bdevs_operational": 4, 00:15:50.343 "base_bdevs_list": [ 00:15:50.343 { 00:15:50.343 "name": "BaseBdev1", 00:15:50.343 "uuid": 
"82bc99ac-dfce-6d58-81a8-01ca91bfd36f", 00:15:50.343 "is_configured": true, 00:15:50.343 "data_offset": 2048, 00:15:50.343 "data_size": 63488 00:15:50.343 }, 00:15:50.343 { 00:15:50.343 "name": "BaseBdev2", 00:15:50.343 "uuid": "19c047ab-e860-0e58-8c23-83fb21795209", 00:15:50.343 "is_configured": true, 00:15:50.343 "data_offset": 2048, 00:15:50.343 "data_size": 63488 00:15:50.343 }, 00:15:50.343 { 00:15:50.343 "name": "BaseBdev3", 00:15:50.343 "uuid": "13d15a17-6f59-0d59-9db1-86e8e646f3ad", 00:15:50.343 "is_configured": true, 00:15:50.343 "data_offset": 2048, 00:15:50.343 "data_size": 63488 00:15:50.343 }, 00:15:50.343 { 00:15:50.343 "name": "BaseBdev4", 00:15:50.343 "uuid": "cfdce8b1-440c-c65a-be99-4d40458090ea", 00:15:50.343 "is_configured": true, 00:15:50.343 "data_offset": 2048, 00:15:50.343 "data_size": 63488 00:15:50.343 } 00:15:50.343 ] 00:15:50.343 }' 00:15:50.343 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.343 18:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.602 18:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:50.860 [2024-07-15 18:29:43.117491] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.860 [2024-07-15 18:29:43.117523] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.860 [2024-07-15 18:29:43.118014] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.860 [2024-07-15 18:29:43.118040] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.860 [2024-07-15 18:29:43.118056] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.860 [2024-07-15 18:29:43.118063] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24fa50835900 name raid_bdev1, state offline 00:15:50.860 0 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62777 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62777 ']' 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62777 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62777 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:50.860 killing process with pid 62777 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62777' 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62777 00:15:50.860 [2024-07-15 18:29:43.148378] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.860 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62777 00:15:50.860 [2024-07-15 18:29:43.176319] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.l8WxmzM8sq 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:15:51.117 00:15:51.117 real 0m7.732s 00:15:51.117 user 0m12.564s 00:15:51.117 sys 0m1.125s 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.117 ************************************ 00:15:51.117 END TEST raid_read_error_test 00:15:51.117 ************************************ 00:15:51.117 18:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 18:29:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:51.117 18:29:43 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:51.117 18:29:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:51.117 18:29:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.117 18:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 ************************************ 00:15:51.117 START TEST raid_write_error_test 00:15:51.117 ************************************ 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.mDgQf3IiIR 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62915 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62915 /var/tmp/spdk-raid.sock 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62915 ']' 00:15:51.117 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:51.118 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:51.118 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:51.118 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.118 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.118 [2024-07-15 18:29:43.458566] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:51.118 [2024-07-15 18:29:43.458799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:51.682 EAL: TSC is not safe to use in SMP mode 00:15:51.682 EAL: TSC is not invariant 00:15:51.682 [2024-07-15 18:29:44.053913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.939 [2024-07-15 18:29:44.160485] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
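(Editorial sketch: the write-error pass below mirrors the read pass above. Once the concat raid is online, the harness injects a one-shot write failure into the first base bdev's error wrapper, runs the bdevperf workload, and then checks that a non-zero failure rate was logged — concat carries no redundancy, so the injected error must surface. Commands and the final check are as they appear in the trace; the bdevperf log path is the mktemp output shown below.)

    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # perform_tests drives the 60s randrw workload via bdevperf.py, then the raid is deleted
    # and the fail-per-second column is pulled from the bdevperf log:
    grep raid_bdev1 "$bdevperf_log" | grep -v Job | awk '{print $6}'   # -> 0.48 in this run
    # has_redundancy returns 1 for concat, so the test asserts fail_per_s != 0.00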
00:15:51.939 [2024-07-15 18:29:44.162593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.939 [2024-07-15 18:29:44.163381] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.939 [2024-07-15 18:29:44.163395] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.197 18:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.197 18:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:52.197 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:52.197 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:52.454 BaseBdev1_malloc 00:15:52.454 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:52.712 true 00:15:52.712 18:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:52.969 [2024-07-15 18:29:45.275709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:52.969 [2024-07-15 18:29:45.275793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.969 [2024-07-15 18:29:45.275824] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b5236c34780 00:15:52.969 [2024-07-15 18:29:45.275833] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.969 [2024-07-15 18:29:45.276625] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.969 [2024-07-15 18:29:45.276651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.969 BaseBdev1 00:15:52.969 18:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:52.969 18:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:53.251 BaseBdev2_malloc 00:15:53.251 18:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:53.508 true 00:15:53.508 18:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:53.766 [2024-07-15 18:29:46.003751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:53.766 [2024-07-15 18:29:46.003810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.766 [2024-07-15 18:29:46.003839] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b5236c34c80 00:15:53.766 [2024-07-15 18:29:46.003849] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.766 [2024-07-15 18:29:46.004640] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.766 [2024-07-15 18:29:46.004664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:15:53.766 BaseBdev2 00:15:53.766 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:53.766 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.024 BaseBdev3_malloc 00:15:54.024 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:54.281 true 00:15:54.281 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:54.539 [2024-07-15 18:29:46.827812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:54.539 [2024-07-15 18:29:46.827878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.539 [2024-07-15 18:29:46.827908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b5236c35180 00:15:54.539 [2024-07-15 18:29:46.827918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.539 [2024-07-15 18:29:46.828696] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.539 [2024-07-15 18:29:46.828721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.539 BaseBdev3 00:15:54.539 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:54.539 18:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.797 BaseBdev4_malloc 00:15:54.797 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:55.054 true 00:15:55.054 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:55.312 [2024-07-15 18:29:47.679871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:55.312 [2024-07-15 18:29:47.679930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.312 [2024-07-15 18:29:47.679960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b5236c35680 00:15:55.312 [2024-07-15 18:29:47.679969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.312 [2024-07-15 18:29:47.680750] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.312 [2024-07-15 18:29:47.680776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.312 BaseBdev4 00:15:55.312 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:55.570 [2024-07-15 18:29:47.919906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.570 [2024-07-15 18:29:47.920624] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.570 [2024-07-15 18:29:47.920651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.570 [2024-07-15 18:29:47.920668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.570 [2024-07-15 18:29:47.920740] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b5236c35900 00:15:55.570 [2024-07-15 18:29:47.920761] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:55.570 [2024-07-15 18:29:47.920816] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b5236ca0e20 00:15:55.570 [2024-07-15 18:29:47.920900] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b5236c35900 00:15:55.571 [2024-07-15 18:29:47.920904] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2b5236c35900 00:15:55.571 [2024-07-15 18:29:47.920932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.571 18:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.829 18:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.829 "name": "raid_bdev1", 00:15:55.829 "uuid": "3741bcf7-42d8-11ef-9ade-d5fc5159efa5", 00:15:55.829 "strip_size_kb": 64, 00:15:55.829 "state": "online", 00:15:55.829 "raid_level": "concat", 00:15:55.829 "superblock": true, 00:15:55.829 "num_base_bdevs": 4, 00:15:55.829 "num_base_bdevs_discovered": 4, 00:15:55.829 "num_base_bdevs_operational": 4, 00:15:55.829 "base_bdevs_list": [ 00:15:55.829 { 00:15:55.829 "name": "BaseBdev1", 00:15:55.829 "uuid": "4ade0e2d-6cf9-705a-b54d-c5c9f9becba5", 00:15:55.829 "is_configured": true, 00:15:55.829 "data_offset": 2048, 00:15:55.829 "data_size": 63488 00:15:55.829 }, 00:15:55.829 { 00:15:55.829 "name": "BaseBdev2", 00:15:55.829 "uuid": "d44cc36d-b1b3-2d52-bdd1-afad4b6dd89e", 00:15:55.829 "is_configured": true, 00:15:55.829 "data_offset": 2048, 00:15:55.829 "data_size": 63488 00:15:55.829 }, 00:15:55.829 { 00:15:55.829 "name": "BaseBdev3", 00:15:55.829 "uuid": 
"2bc70b4f-6147-5e58-b596-5f7c11f8b811", 00:15:55.829 "is_configured": true, 00:15:55.829 "data_offset": 2048, 00:15:55.829 "data_size": 63488 00:15:55.829 }, 00:15:55.829 { 00:15:55.829 "name": "BaseBdev4", 00:15:55.829 "uuid": "62a7e096-555f-ff51-932e-fbfeacb9dfa7", 00:15:55.829 "is_configured": true, 00:15:55.829 "data_offset": 2048, 00:15:55.829 "data_size": 63488 00:15:55.829 } 00:15:55.829 ] 00:15:55.829 }' 00:15:55.829 18:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.829 18:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.087 18:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:56.087 18:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:56.346 [2024-07-15 18:29:48.576174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b5236ca0ec0 00:15:57.282 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.542 18:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.801 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.801 "name": "raid_bdev1", 00:15:57.801 "uuid": "3741bcf7-42d8-11ef-9ade-d5fc5159efa5", 00:15:57.801 "strip_size_kb": 64, 00:15:57.801 "state": "online", 00:15:57.801 "raid_level": "concat", 00:15:57.801 "superblock": true, 00:15:57.801 "num_base_bdevs": 4, 00:15:57.801 "num_base_bdevs_discovered": 4, 00:15:57.801 "num_base_bdevs_operational": 4, 00:15:57.801 "base_bdevs_list": [ 00:15:57.801 { 00:15:57.801 "name": "BaseBdev1", 00:15:57.801 "uuid": 
"4ade0e2d-6cf9-705a-b54d-c5c9f9becba5", 00:15:57.801 "is_configured": true, 00:15:57.801 "data_offset": 2048, 00:15:57.801 "data_size": 63488 00:15:57.801 }, 00:15:57.801 { 00:15:57.801 "name": "BaseBdev2", 00:15:57.801 "uuid": "d44cc36d-b1b3-2d52-bdd1-afad4b6dd89e", 00:15:57.801 "is_configured": true, 00:15:57.801 "data_offset": 2048, 00:15:57.801 "data_size": 63488 00:15:57.801 }, 00:15:57.801 { 00:15:57.801 "name": "BaseBdev3", 00:15:57.801 "uuid": "2bc70b4f-6147-5e58-b596-5f7c11f8b811", 00:15:57.801 "is_configured": true, 00:15:57.801 "data_offset": 2048, 00:15:57.801 "data_size": 63488 00:15:57.801 }, 00:15:57.801 { 00:15:57.801 "name": "BaseBdev4", 00:15:57.801 "uuid": "62a7e096-555f-ff51-932e-fbfeacb9dfa7", 00:15:57.801 "is_configured": true, 00:15:57.801 "data_offset": 2048, 00:15:57.801 "data_size": 63488 00:15:57.801 } 00:15:57.801 ] 00:15:57.801 }' 00:15:57.801 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.801 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.060 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.319 [2024-07-15 18:29:50.655534] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.319 [2024-07-15 18:29:50.655562] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.319 [2024-07-15 18:29:50.655896] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.319 [2024-07-15 18:29:50.655907] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.319 [2024-07-15 18:29:50.655917] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.319 [2024-07-15 18:29:50.655921] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b5236c35900 name raid_bdev1, state offline 00:15:58.319 0 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62915 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62915 ']' 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62915 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62915 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:58.319 killing process with pid 62915 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62915' 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62915 00:15:58.319 [2024-07-15 18:29:50.685895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.319 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62915 00:15:58.577 [2024-07-15 
18:29:50.713642] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.mDgQf3IiIR 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:15:58.577 00:15:58.577 real 0m7.494s 00:15:58.577 user 0m11.930s 00:15:58.577 sys 0m1.224s 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.577 18:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.577 ************************************ 00:15:58.577 END TEST raid_write_error_test 00:15:58.577 ************************************ 00:15:58.837 18:29:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:58.837 18:29:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:58.837 18:29:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:58.837 18:29:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:58.837 18:29:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.837 18:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 ************************************ 00:15:58.837 START TEST raid_state_function_test 00:15:58.837 ************************************ 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=63055 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63055' 00:15:58.837 Process raid pid: 63055 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 63055 /var/tmp/spdk-raid.sock 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 63055 ']' 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.837 18:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 [2024-07-15 18:29:50.999196] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:15:58.837 [2024-07-15 18:29:50.999428] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:59.405 EAL: TSC is not safe to use in SMP mode 00:15:59.405 EAL: TSC is not invariant 00:15:59.405 [2024-07-15 18:29:51.601776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.405 [2024-07-15 18:29:51.718148] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:59.405 [2024-07-15 18:29:51.720592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.405 [2024-07-15 18:29:51.721590] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.405 [2024-07-15 18:29:51.721618] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.664 18:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.664 18:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:59.664 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:59.923 [2024-07-15 18:29:52.291056] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.923 [2024-07-15 18:29:52.291117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.923 [2024-07-15 18:29:52.291124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.923 [2024-07-15 18:29:52.291133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.923 [2024-07-15 18:29:52.291137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.923 [2024-07-15 18:29:52.291144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.923 [2024-07-15 18:29:52.291148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:59.923 [2024-07-15 18:29:52.291155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.923 18:29:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.923 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.181 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.181 "name": "Existed_Raid", 00:16:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.181 "strip_size_kb": 0, 00:16:00.181 "state": "configuring", 00:16:00.181 "raid_level": "raid1", 00:16:00.181 "superblock": false, 00:16:00.181 "num_base_bdevs": 4, 00:16:00.181 "num_base_bdevs_discovered": 0, 00:16:00.181 "num_base_bdevs_operational": 4, 00:16:00.181 "base_bdevs_list": [ 00:16:00.181 { 00:16:00.181 "name": "BaseBdev1", 00:16:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.181 "is_configured": false, 00:16:00.181 "data_offset": 0, 00:16:00.181 "data_size": 0 00:16:00.181 }, 00:16:00.181 { 00:16:00.181 "name": "BaseBdev2", 00:16:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.181 "is_configured": false, 00:16:00.181 "data_offset": 0, 00:16:00.181 "data_size": 0 00:16:00.181 }, 00:16:00.181 { 00:16:00.181 "name": "BaseBdev3", 00:16:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.181 "is_configured": false, 00:16:00.181 "data_offset": 0, 00:16:00.181 "data_size": 0 00:16:00.181 }, 00:16:00.181 { 00:16:00.181 "name": "BaseBdev4", 00:16:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.181 "is_configured": false, 00:16:00.181 "data_offset": 0, 00:16:00.181 "data_size": 0 00:16:00.181 } 00:16:00.181 ] 00:16:00.181 }' 00:16:00.181 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.181 18:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.772 18:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.772 [2024-07-15 18:29:53.139092] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.772 [2024-07-15 18:29:53.139122] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33ff33834500 name Existed_Raid, state configuring 00:16:00.772 18:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:01.034 [2024-07-15 18:29:53.427124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.034 [2024-07-15 18:29:53.427178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.034 [2024-07-15 18:29:53.427184] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.034 [2024-07-15 18:29:53.427194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.034 [2024-07-15 18:29:53.427197] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.034 [2024-07-15 18:29:53.427205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.034 [2024-07-15 18:29:53.427209] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.034 
[2024-07-15 18:29:53.427217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.291 18:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.550 [2024-07-15 18:29:53.716252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.550 BaseBdev1 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.550 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.810 18:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.068 [ 00:16:02.068 { 00:16:02.068 "name": "BaseBdev1", 00:16:02.068 "aliases": [ 00:16:02.068 "3ab6064c-42d8-11ef-9ade-d5fc5159efa5" 00:16:02.068 ], 00:16:02.068 "product_name": "Malloc disk", 00:16:02.068 "block_size": 512, 00:16:02.068 "num_blocks": 65536, 00:16:02.068 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:02.068 "assigned_rate_limits": { 00:16:02.068 "rw_ios_per_sec": 0, 00:16:02.068 "rw_mbytes_per_sec": 0, 00:16:02.068 "r_mbytes_per_sec": 0, 00:16:02.068 "w_mbytes_per_sec": 0 00:16:02.068 }, 00:16:02.069 "claimed": true, 00:16:02.069 "claim_type": "exclusive_write", 00:16:02.069 "zoned": false, 00:16:02.069 "supported_io_types": { 00:16:02.069 "read": true, 00:16:02.069 "write": true, 00:16:02.069 "unmap": true, 00:16:02.069 "flush": true, 00:16:02.069 "reset": true, 00:16:02.069 "nvme_admin": false, 00:16:02.069 "nvme_io": false, 00:16:02.069 "nvme_io_md": false, 00:16:02.069 "write_zeroes": true, 00:16:02.069 "zcopy": true, 00:16:02.069 "get_zone_info": false, 00:16:02.069 "zone_management": false, 00:16:02.069 "zone_append": false, 00:16:02.069 "compare": false, 00:16:02.069 "compare_and_write": false, 00:16:02.069 "abort": true, 00:16:02.069 "seek_hole": false, 00:16:02.069 "seek_data": false, 00:16:02.069 "copy": true, 00:16:02.069 "nvme_iov_md": false 00:16:02.069 }, 00:16:02.069 "memory_domains": [ 00:16:02.069 { 00:16:02.069 "dma_device_id": "system", 00:16:02.069 "dma_device_type": 1 00:16:02.069 }, 00:16:02.069 { 00:16:02.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.069 "dma_device_type": 2 00:16:02.069 } 00:16:02.069 ], 00:16:02.069 "driver_specific": {} 00:16:02.069 } 00:16:02.069 ] 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.069 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.328 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.328 "name": "Existed_Raid", 00:16:02.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.328 "strip_size_kb": 0, 00:16:02.328 "state": "configuring", 00:16:02.328 "raid_level": "raid1", 00:16:02.328 "superblock": false, 00:16:02.328 "num_base_bdevs": 4, 00:16:02.328 "num_base_bdevs_discovered": 1, 00:16:02.328 "num_base_bdevs_operational": 4, 00:16:02.328 "base_bdevs_list": [ 00:16:02.328 { 00:16:02.328 "name": "BaseBdev1", 00:16:02.328 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:02.328 "is_configured": true, 00:16:02.328 "data_offset": 0, 00:16:02.328 "data_size": 65536 00:16:02.328 }, 00:16:02.328 { 00:16:02.328 "name": "BaseBdev2", 00:16:02.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.328 "is_configured": false, 00:16:02.328 "data_offset": 0, 00:16:02.328 "data_size": 0 00:16:02.328 }, 00:16:02.328 { 00:16:02.328 "name": "BaseBdev3", 00:16:02.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.328 "is_configured": false, 00:16:02.328 "data_offset": 0, 00:16:02.328 "data_size": 0 00:16:02.328 }, 00:16:02.328 { 00:16:02.328 "name": "BaseBdev4", 00:16:02.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.329 "is_configured": false, 00:16:02.329 "data_offset": 0, 00:16:02.329 "data_size": 0 00:16:02.329 } 00:16:02.329 ] 00:16:02.329 }' 00:16:02.329 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.329 18:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.588 18:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.847 [2024-07-15 18:29:55.031220] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.847 [2024-07-15 18:29:55.031251] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33ff33834500 name Existed_Raid, state configuring 00:16:02.847 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:03.106 
[2024-07-15 18:29:55.319263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.106 [2024-07-15 18:29:55.320147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.106 [2024-07-15 18:29:55.320188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.106 [2024-07-15 18:29:55.320195] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.106 [2024-07-15 18:29:55.320204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.106 [2024-07-15 18:29:55.320208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.106 [2024-07-15 18:29:55.320215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.106 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.364 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.364 "name": "Existed_Raid", 00:16:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.364 "strip_size_kb": 0, 00:16:03.364 "state": "configuring", 00:16:03.364 "raid_level": "raid1", 00:16:03.364 "superblock": false, 00:16:03.364 "num_base_bdevs": 4, 00:16:03.364 "num_base_bdevs_discovered": 1, 00:16:03.364 "num_base_bdevs_operational": 4, 00:16:03.364 "base_bdevs_list": [ 00:16:03.364 { 00:16:03.364 "name": "BaseBdev1", 00:16:03.364 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:03.364 "is_configured": true, 00:16:03.364 "data_offset": 0, 00:16:03.364 "data_size": 65536 00:16:03.364 }, 00:16:03.364 { 00:16:03.364 "name": "BaseBdev2", 00:16:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.364 "is_configured": false, 00:16:03.364 "data_offset": 0, 00:16:03.364 "data_size": 0 00:16:03.364 }, 00:16:03.364 { 
00:16:03.364 "name": "BaseBdev3", 00:16:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.364 "is_configured": false, 00:16:03.364 "data_offset": 0, 00:16:03.364 "data_size": 0 00:16:03.364 }, 00:16:03.364 { 00:16:03.364 "name": "BaseBdev4", 00:16:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.364 "is_configured": false, 00:16:03.364 "data_offset": 0, 00:16:03.364 "data_size": 0 00:16:03.364 } 00:16:03.364 ] 00:16:03.364 }' 00:16:03.364 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.364 18:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.622 18:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.880 [2024-07-15 18:29:56.171479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.880 BaseBdev2 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.880 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.139 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.397 [ 00:16:04.397 { 00:16:04.397 "name": "BaseBdev2", 00:16:04.397 "aliases": [ 00:16:04.397 "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5" 00:16:04.397 ], 00:16:04.397 "product_name": "Malloc disk", 00:16:04.397 "block_size": 512, 00:16:04.397 "num_blocks": 65536, 00:16:04.397 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:04.397 "assigned_rate_limits": { 00:16:04.397 "rw_ios_per_sec": 0, 00:16:04.397 "rw_mbytes_per_sec": 0, 00:16:04.397 "r_mbytes_per_sec": 0, 00:16:04.397 "w_mbytes_per_sec": 0 00:16:04.397 }, 00:16:04.397 "claimed": true, 00:16:04.397 "claim_type": "exclusive_write", 00:16:04.397 "zoned": false, 00:16:04.397 "supported_io_types": { 00:16:04.397 "read": true, 00:16:04.397 "write": true, 00:16:04.397 "unmap": true, 00:16:04.397 "flush": true, 00:16:04.397 "reset": true, 00:16:04.397 "nvme_admin": false, 00:16:04.397 "nvme_io": false, 00:16:04.397 "nvme_io_md": false, 00:16:04.397 "write_zeroes": true, 00:16:04.397 "zcopy": true, 00:16:04.397 "get_zone_info": false, 00:16:04.397 "zone_management": false, 00:16:04.397 "zone_append": false, 00:16:04.398 "compare": false, 00:16:04.398 "compare_and_write": false, 00:16:04.398 "abort": true, 00:16:04.398 "seek_hole": false, 00:16:04.398 "seek_data": false, 00:16:04.398 "copy": true, 00:16:04.398 "nvme_iov_md": false 00:16:04.398 }, 00:16:04.398 "memory_domains": [ 00:16:04.398 { 00:16:04.398 "dma_device_id": "system", 00:16:04.398 "dma_device_type": 1 00:16:04.398 }, 00:16:04.398 { 00:16:04.398 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.398 "dma_device_type": 2 00:16:04.398 } 00:16:04.398 ], 00:16:04.398 "driver_specific": {} 00:16:04.398 } 00:16:04.398 ] 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.398 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.660 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.660 "name": "Existed_Raid", 00:16:04.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.660 "strip_size_kb": 0, 00:16:04.660 "state": "configuring", 00:16:04.660 "raid_level": "raid1", 00:16:04.660 "superblock": false, 00:16:04.660 "num_base_bdevs": 4, 00:16:04.660 "num_base_bdevs_discovered": 2, 00:16:04.660 "num_base_bdevs_operational": 4, 00:16:04.660 "base_bdevs_list": [ 00:16:04.660 { 00:16:04.660 "name": "BaseBdev1", 00:16:04.660 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:04.660 "is_configured": true, 00:16:04.660 "data_offset": 0, 00:16:04.660 "data_size": 65536 00:16:04.660 }, 00:16:04.660 { 00:16:04.660 "name": "BaseBdev2", 00:16:04.660 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:04.660 "is_configured": true, 00:16:04.660 "data_offset": 0, 00:16:04.660 "data_size": 65536 00:16:04.660 }, 00:16:04.660 { 00:16:04.660 "name": "BaseBdev3", 00:16:04.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.660 "is_configured": false, 00:16:04.660 "data_offset": 0, 00:16:04.660 "data_size": 0 00:16:04.660 }, 00:16:04.660 { 00:16:04.660 "name": "BaseBdev4", 00:16:04.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.660 "is_configured": false, 00:16:04.660 "data_offset": 0, 00:16:04.660 "data_size": 0 00:16:04.660 } 00:16:04.660 ] 00:16:04.660 }' 00:16:04.660 18:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.660 18:29:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.916 18:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.174 [2024-07-15 18:29:57.483578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.175 BaseBdev3 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.175 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.439 18:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.699 [ 00:16:05.699 { 00:16:05.699 "name": "BaseBdev3", 00:16:05.699 "aliases": [ 00:16:05.699 "3cf50534-42d8-11ef-9ade-d5fc5159efa5" 00:16:05.699 ], 00:16:05.699 "product_name": "Malloc disk", 00:16:05.699 "block_size": 512, 00:16:05.699 "num_blocks": 65536, 00:16:05.699 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:05.699 "assigned_rate_limits": { 00:16:05.699 "rw_ios_per_sec": 0, 00:16:05.699 "rw_mbytes_per_sec": 0, 00:16:05.699 "r_mbytes_per_sec": 0, 00:16:05.699 "w_mbytes_per_sec": 0 00:16:05.699 }, 00:16:05.699 "claimed": true, 00:16:05.699 "claim_type": "exclusive_write", 00:16:05.699 "zoned": false, 00:16:05.699 "supported_io_types": { 00:16:05.699 "read": true, 00:16:05.699 "write": true, 00:16:05.699 "unmap": true, 00:16:05.699 "flush": true, 00:16:05.699 "reset": true, 00:16:05.699 "nvme_admin": false, 00:16:05.699 "nvme_io": false, 00:16:05.699 "nvme_io_md": false, 00:16:05.699 "write_zeroes": true, 00:16:05.699 "zcopy": true, 00:16:05.699 "get_zone_info": false, 00:16:05.699 "zone_management": false, 00:16:05.699 "zone_append": false, 00:16:05.699 "compare": false, 00:16:05.699 "compare_and_write": false, 00:16:05.699 "abort": true, 00:16:05.699 "seek_hole": false, 00:16:05.699 "seek_data": false, 00:16:05.699 "copy": true, 00:16:05.699 "nvme_iov_md": false 00:16:05.699 }, 00:16:05.699 "memory_domains": [ 00:16:05.699 { 00:16:05.699 "dma_device_id": "system", 00:16:05.699 "dma_device_type": 1 00:16:05.699 }, 00:16:05.699 { 00:16:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.699 "dma_device_type": 2 00:16:05.699 } 00:16:05.699 ], 00:16:05.699 "driver_specific": {} 00:16:05.699 } 00:16:05.699 ] 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.699 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.957 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.957 "name": "Existed_Raid", 00:16:05.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.957 "strip_size_kb": 0, 00:16:05.957 "state": "configuring", 00:16:05.957 "raid_level": "raid1", 00:16:05.957 "superblock": false, 00:16:05.957 "num_base_bdevs": 4, 00:16:05.957 "num_base_bdevs_discovered": 3, 00:16:05.957 "num_base_bdevs_operational": 4, 00:16:05.957 "base_bdevs_list": [ 00:16:05.957 { 00:16:05.957 "name": "BaseBdev1", 00:16:05.957 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:05.957 "is_configured": true, 00:16:05.957 "data_offset": 0, 00:16:05.957 "data_size": 65536 00:16:05.957 }, 00:16:05.957 { 00:16:05.957 "name": "BaseBdev2", 00:16:05.957 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:05.957 "is_configured": true, 00:16:05.957 "data_offset": 0, 00:16:05.957 "data_size": 65536 00:16:05.957 }, 00:16:05.957 { 00:16:05.957 "name": "BaseBdev3", 00:16:05.957 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:05.957 "is_configured": true, 00:16:05.957 "data_offset": 0, 00:16:05.957 "data_size": 65536 00:16:05.957 }, 00:16:05.957 { 00:16:05.957 "name": "BaseBdev4", 00:16:05.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.957 "is_configured": false, 00:16:05.957 "data_offset": 0, 00:16:05.957 "data_size": 0 00:16:05.957 } 00:16:05.957 ] 00:16:05.957 }' 00:16:05.957 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.957 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.524 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.524 [2024-07-15 18:29:58.911665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.524 [2024-07-15 18:29:58.911698] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33ff33834a00 00:16:06.524 [2024-07-15 18:29:58.911703] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:06.524 
[2024-07-15 18:29:58.911734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33ff33897e20 00:16:06.524 [2024-07-15 18:29:58.911835] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33ff33834a00 00:16:06.524 [2024-07-15 18:29:58.911840] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x33ff33834a00 00:16:06.524 [2024-07-15 18:29:58.911874] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.524 BaseBdev4 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.783 18:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:07.042 18:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:07.301 [ 00:16:07.301 { 00:16:07.301 "name": "BaseBdev4", 00:16:07.301 "aliases": [ 00:16:07.301 "3dceeda6-42d8-11ef-9ade-d5fc5159efa5" 00:16:07.301 ], 00:16:07.301 "product_name": "Malloc disk", 00:16:07.301 "block_size": 512, 00:16:07.301 "num_blocks": 65536, 00:16:07.301 "uuid": "3dceeda6-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.301 "assigned_rate_limits": { 00:16:07.301 "rw_ios_per_sec": 0, 00:16:07.301 "rw_mbytes_per_sec": 0, 00:16:07.301 "r_mbytes_per_sec": 0, 00:16:07.301 "w_mbytes_per_sec": 0 00:16:07.301 }, 00:16:07.301 "claimed": true, 00:16:07.301 "claim_type": "exclusive_write", 00:16:07.301 "zoned": false, 00:16:07.301 "supported_io_types": { 00:16:07.301 "read": true, 00:16:07.301 "write": true, 00:16:07.301 "unmap": true, 00:16:07.301 "flush": true, 00:16:07.301 "reset": true, 00:16:07.301 "nvme_admin": false, 00:16:07.301 "nvme_io": false, 00:16:07.301 "nvme_io_md": false, 00:16:07.301 "write_zeroes": true, 00:16:07.301 "zcopy": true, 00:16:07.301 "get_zone_info": false, 00:16:07.301 "zone_management": false, 00:16:07.301 "zone_append": false, 00:16:07.301 "compare": false, 00:16:07.301 "compare_and_write": false, 00:16:07.301 "abort": true, 00:16:07.301 "seek_hole": false, 00:16:07.301 "seek_data": false, 00:16:07.301 "copy": true, 00:16:07.301 "nvme_iov_md": false 00:16:07.301 }, 00:16:07.301 "memory_domains": [ 00:16:07.301 { 00:16:07.301 "dma_device_id": "system", 00:16:07.301 "dma_device_type": 1 00:16:07.301 }, 00:16:07.301 { 00:16:07.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.301 "dma_device_type": 2 00:16:07.301 } 00:16:07.301 ], 00:16:07.301 "driver_specific": {} 00:16:07.301 } 00:16:07.301 ] 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.301 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.560 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.560 "name": "Existed_Raid", 00:16:07.560 "uuid": "3dcef4e9-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.560 "strip_size_kb": 0, 00:16:07.560 "state": "online", 00:16:07.560 "raid_level": "raid1", 00:16:07.560 "superblock": false, 00:16:07.560 "num_base_bdevs": 4, 00:16:07.560 "num_base_bdevs_discovered": 4, 00:16:07.560 "num_base_bdevs_operational": 4, 00:16:07.560 "base_bdevs_list": [ 00:16:07.560 { 00:16:07.560 "name": "BaseBdev1", 00:16:07.560 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.560 "is_configured": true, 00:16:07.560 "data_offset": 0, 00:16:07.560 "data_size": 65536 00:16:07.560 }, 00:16:07.560 { 00:16:07.560 "name": "BaseBdev2", 00:16:07.560 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.560 "is_configured": true, 00:16:07.560 "data_offset": 0, 00:16:07.560 "data_size": 65536 00:16:07.560 }, 00:16:07.560 { 00:16:07.560 "name": "BaseBdev3", 00:16:07.560 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.560 "is_configured": true, 00:16:07.560 "data_offset": 0, 00:16:07.560 "data_size": 65536 00:16:07.560 }, 00:16:07.560 { 00:16:07.560 "name": "BaseBdev4", 00:16:07.560 "uuid": "3dceeda6-42d8-11ef-9ade-d5fc5159efa5", 00:16:07.560 "is_configured": true, 00:16:07.560 "data_offset": 0, 00:16:07.560 "data_size": 65536 00:16:07.560 } 00:16:07.560 ] 00:16:07.560 }' 00:16:07.560 18:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.560 18:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:07.818 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:08.386 [2024-07-15 18:30:00.499691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.386 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:08.386 "name": "Existed_Raid", 00:16:08.386 "aliases": [ 00:16:08.386 "3dcef4e9-42d8-11ef-9ade-d5fc5159efa5" 00:16:08.386 ], 00:16:08.386 "product_name": "Raid Volume", 00:16:08.386 "block_size": 512, 00:16:08.386 "num_blocks": 65536, 00:16:08.386 "uuid": "3dcef4e9-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "assigned_rate_limits": { 00:16:08.387 "rw_ios_per_sec": 0, 00:16:08.387 "rw_mbytes_per_sec": 0, 00:16:08.387 "r_mbytes_per_sec": 0, 00:16:08.387 "w_mbytes_per_sec": 0 00:16:08.387 }, 00:16:08.387 "claimed": false, 00:16:08.387 "zoned": false, 00:16:08.387 "supported_io_types": { 00:16:08.387 "read": true, 00:16:08.387 "write": true, 00:16:08.387 "unmap": false, 00:16:08.387 "flush": false, 00:16:08.387 "reset": true, 00:16:08.387 "nvme_admin": false, 00:16:08.387 "nvme_io": false, 00:16:08.387 "nvme_io_md": false, 00:16:08.387 "write_zeroes": true, 00:16:08.387 "zcopy": false, 00:16:08.387 "get_zone_info": false, 00:16:08.387 "zone_management": false, 00:16:08.387 "zone_append": false, 00:16:08.387 "compare": false, 00:16:08.387 "compare_and_write": false, 00:16:08.387 "abort": false, 00:16:08.387 "seek_hole": false, 00:16:08.387 "seek_data": false, 00:16:08.387 "copy": false, 00:16:08.387 "nvme_iov_md": false 00:16:08.387 }, 00:16:08.387 "memory_domains": [ 00:16:08.387 { 00:16:08.387 "dma_device_id": "system", 00:16:08.387 "dma_device_type": 1 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.387 "dma_device_type": 2 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "system", 00:16:08.387 "dma_device_type": 1 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.387 "dma_device_type": 2 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "system", 00:16:08.387 "dma_device_type": 1 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.387 "dma_device_type": 2 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "system", 00:16:08.387 "dma_device_type": 1 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.387 "dma_device_type": 2 00:16:08.387 } 00:16:08.387 ], 00:16:08.387 "driver_specific": { 00:16:08.387 "raid": { 00:16:08.387 "uuid": "3dcef4e9-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "strip_size_kb": 0, 00:16:08.387 "state": "online", 00:16:08.387 "raid_level": "raid1", 00:16:08.387 "superblock": false, 00:16:08.387 "num_base_bdevs": 4, 00:16:08.387 "num_base_bdevs_discovered": 4, 00:16:08.387 "num_base_bdevs_operational": 4, 00:16:08.387 "base_bdevs_list": [ 00:16:08.387 { 00:16:08.387 "name": "BaseBdev1", 00:16:08.387 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "is_configured": true, 00:16:08.387 "data_offset": 0, 00:16:08.387 
"data_size": 65536 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "name": "BaseBdev2", 00:16:08.387 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "is_configured": true, 00:16:08.387 "data_offset": 0, 00:16:08.387 "data_size": 65536 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "name": "BaseBdev3", 00:16:08.387 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "is_configured": true, 00:16:08.387 "data_offset": 0, 00:16:08.387 "data_size": 65536 00:16:08.387 }, 00:16:08.387 { 00:16:08.387 "name": "BaseBdev4", 00:16:08.387 "uuid": "3dceeda6-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.387 "is_configured": true, 00:16:08.387 "data_offset": 0, 00:16:08.387 "data_size": 65536 00:16:08.387 } 00:16:08.387 ] 00:16:08.387 } 00:16:08.387 } 00:16:08.387 }' 00:16:08.387 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.387 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:08.387 BaseBdev2 00:16:08.387 BaseBdev3 00:16:08.387 BaseBdev4' 00:16:08.387 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:08.387 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:08.387 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:08.646 "name": "BaseBdev1", 00:16:08.646 "aliases": [ 00:16:08.646 "3ab6064c-42d8-11ef-9ade-d5fc5159efa5" 00:16:08.646 ], 00:16:08.646 "product_name": "Malloc disk", 00:16:08.646 "block_size": 512, 00:16:08.646 "num_blocks": 65536, 00:16:08.646 "uuid": "3ab6064c-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.646 "assigned_rate_limits": { 00:16:08.646 "rw_ios_per_sec": 0, 00:16:08.646 "rw_mbytes_per_sec": 0, 00:16:08.646 "r_mbytes_per_sec": 0, 00:16:08.646 "w_mbytes_per_sec": 0 00:16:08.646 }, 00:16:08.646 "claimed": true, 00:16:08.646 "claim_type": "exclusive_write", 00:16:08.646 "zoned": false, 00:16:08.646 "supported_io_types": { 00:16:08.646 "read": true, 00:16:08.646 "write": true, 00:16:08.646 "unmap": true, 00:16:08.646 "flush": true, 00:16:08.646 "reset": true, 00:16:08.646 "nvme_admin": false, 00:16:08.646 "nvme_io": false, 00:16:08.646 "nvme_io_md": false, 00:16:08.646 "write_zeroes": true, 00:16:08.646 "zcopy": true, 00:16:08.646 "get_zone_info": false, 00:16:08.646 "zone_management": false, 00:16:08.646 "zone_append": false, 00:16:08.646 "compare": false, 00:16:08.646 "compare_and_write": false, 00:16:08.646 "abort": true, 00:16:08.646 "seek_hole": false, 00:16:08.646 "seek_data": false, 00:16:08.646 "copy": true, 00:16:08.646 "nvme_iov_md": false 00:16:08.646 }, 00:16:08.646 "memory_domains": [ 00:16:08.646 { 00:16:08.646 "dma_device_id": "system", 00:16:08.646 "dma_device_type": 1 00:16:08.646 }, 00:16:08.646 { 00:16:08.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.646 "dma_device_type": 2 00:16:08.646 } 00:16:08.646 ], 00:16:08.646 "driver_specific": {} 00:16:08.646 }' 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:08.646 18:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:08.905 "name": "BaseBdev2", 00:16:08.905 "aliases": [ 00:16:08.905 "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5" 00:16:08.905 ], 00:16:08.905 "product_name": "Malloc disk", 00:16:08.905 "block_size": 512, 00:16:08.905 "num_blocks": 65536, 00:16:08.905 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:08.905 "assigned_rate_limits": { 00:16:08.905 "rw_ios_per_sec": 0, 00:16:08.905 "rw_mbytes_per_sec": 0, 00:16:08.905 "r_mbytes_per_sec": 0, 00:16:08.905 "w_mbytes_per_sec": 0 00:16:08.905 }, 00:16:08.905 "claimed": true, 00:16:08.905 "claim_type": "exclusive_write", 00:16:08.905 "zoned": false, 00:16:08.905 "supported_io_types": { 00:16:08.905 "read": true, 00:16:08.905 "write": true, 00:16:08.905 "unmap": true, 00:16:08.905 "flush": true, 00:16:08.905 "reset": true, 00:16:08.905 "nvme_admin": false, 00:16:08.905 "nvme_io": false, 00:16:08.905 "nvme_io_md": false, 00:16:08.905 "write_zeroes": true, 00:16:08.905 "zcopy": true, 00:16:08.905 "get_zone_info": false, 00:16:08.905 "zone_management": false, 00:16:08.905 "zone_append": false, 00:16:08.905 "compare": false, 00:16:08.905 "compare_and_write": false, 00:16:08.905 "abort": true, 00:16:08.905 "seek_hole": false, 00:16:08.905 "seek_data": false, 00:16:08.905 "copy": true, 00:16:08.905 "nvme_iov_md": false 00:16:08.905 }, 00:16:08.905 "memory_domains": [ 00:16:08.905 { 00:16:08.905 "dma_device_id": "system", 00:16:08.905 "dma_device_type": 1 00:16:08.905 }, 00:16:08.905 { 00:16:08.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.905 "dma_device_type": 2 00:16:08.905 } 00:16:08.905 ], 00:16:08.905 "driver_specific": {} 00:16:08.905 }' 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
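After block_size, the same pattern repeats for the metadata fields. These malloc bdevs carry no separate metadata, so every query returns null on both sides, which is why the trace settles into [[ null == null ]] for md_size, md_interleave and dif_type. A compact sketch of that check, hedged as an approximation of the per-field tests in bdev_raid.sh rather than its exact code:

  # Each remaining property must match between the raid volume and the
  # base bdev; for plain malloc disks all three come back as null.
  for field in .md_size .md_interleave .dif_type; do
    [[ $(jq "$field" <<< "$raid_bdev_info") == $(jq "$field" <<< "$base_bdev_info") ]]
  done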
00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:08.905 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.163 "name": "BaseBdev3", 00:16:09.163 "aliases": [ 00:16:09.163 "3cf50534-42d8-11ef-9ade-d5fc5159efa5" 00:16:09.163 ], 00:16:09.163 "product_name": "Malloc disk", 00:16:09.163 "block_size": 512, 00:16:09.163 "num_blocks": 65536, 00:16:09.163 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:09.163 "assigned_rate_limits": { 00:16:09.163 "rw_ios_per_sec": 0, 00:16:09.163 "rw_mbytes_per_sec": 0, 00:16:09.163 "r_mbytes_per_sec": 0, 00:16:09.163 "w_mbytes_per_sec": 0 00:16:09.163 }, 00:16:09.163 "claimed": true, 00:16:09.163 "claim_type": "exclusive_write", 00:16:09.163 "zoned": false, 00:16:09.163 "supported_io_types": { 00:16:09.163 "read": true, 00:16:09.163 "write": true, 00:16:09.163 "unmap": true, 00:16:09.163 "flush": true, 00:16:09.163 "reset": true, 00:16:09.163 "nvme_admin": false, 00:16:09.163 "nvme_io": false, 00:16:09.163 "nvme_io_md": false, 00:16:09.163 "write_zeroes": true, 00:16:09.163 "zcopy": true, 00:16:09.163 "get_zone_info": false, 00:16:09.163 "zone_management": false, 00:16:09.163 "zone_append": false, 00:16:09.163 "compare": false, 00:16:09.163 "compare_and_write": false, 00:16:09.163 "abort": true, 00:16:09.163 "seek_hole": false, 00:16:09.163 "seek_data": false, 00:16:09.163 "copy": true, 00:16:09.163 "nvme_iov_md": false 00:16:09.163 }, 00:16:09.163 "memory_domains": [ 00:16:09.163 { 00:16:09.163 "dma_device_id": "system", 00:16:09.163 "dma_device_type": 1 00:16:09.163 }, 00:16:09.163 { 00:16:09.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.163 "dma_device_type": 2 00:16:09.163 } 00:16:09.163 ], 00:16:09.163 "driver_specific": {} 00:16:09.163 }' 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:09.163 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.422 "name": "BaseBdev4", 00:16:09.422 "aliases": [ 00:16:09.422 "3dceeda6-42d8-11ef-9ade-d5fc5159efa5" 00:16:09.422 ], 00:16:09.422 "product_name": "Malloc disk", 00:16:09.422 "block_size": 512, 00:16:09.422 "num_blocks": 65536, 00:16:09.422 "uuid": "3dceeda6-42d8-11ef-9ade-d5fc5159efa5", 00:16:09.422 "assigned_rate_limits": { 00:16:09.422 "rw_ios_per_sec": 0, 00:16:09.422 "rw_mbytes_per_sec": 0, 00:16:09.422 "r_mbytes_per_sec": 0, 00:16:09.422 "w_mbytes_per_sec": 0 00:16:09.422 }, 00:16:09.422 "claimed": true, 00:16:09.422 "claim_type": "exclusive_write", 00:16:09.422 "zoned": false, 00:16:09.422 "supported_io_types": { 00:16:09.422 "read": true, 00:16:09.422 "write": true, 00:16:09.422 "unmap": true, 00:16:09.422 "flush": true, 00:16:09.422 "reset": true, 00:16:09.422 "nvme_admin": false, 00:16:09.422 "nvme_io": false, 00:16:09.422 "nvme_io_md": false, 00:16:09.422 "write_zeroes": true, 00:16:09.422 "zcopy": true, 00:16:09.422 "get_zone_info": false, 00:16:09.422 "zone_management": false, 00:16:09.422 "zone_append": false, 00:16:09.422 "compare": false, 00:16:09.422 "compare_and_write": false, 00:16:09.422 "abort": true, 00:16:09.422 "seek_hole": false, 00:16:09.422 "seek_data": false, 00:16:09.422 "copy": true, 00:16:09.422 "nvme_iov_md": false 00:16:09.422 }, 00:16:09.422 "memory_domains": [ 00:16:09.422 { 00:16:09.422 "dma_device_id": "system", 00:16:09.422 "dma_device_type": 1 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.422 "dma_device_type": 2 00:16:09.422 } 00:16:09.422 ], 00:16:09.422 "driver_specific": {} 00:16:09.422 }' 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:09.422 18:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:09.681 [2024-07-15 18:30:02.071765] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.939 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.940 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.940 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.198 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.198 "name": "Existed_Raid", 00:16:10.198 "uuid": "3dcef4e9-42d8-11ef-9ade-d5fc5159efa5", 00:16:10.198 "strip_size_kb": 0, 00:16:10.198 "state": "online", 00:16:10.198 "raid_level": "raid1", 00:16:10.198 "superblock": false, 00:16:10.198 "num_base_bdevs": 4, 00:16:10.198 "num_base_bdevs_discovered": 3, 00:16:10.198 "num_base_bdevs_operational": 3, 00:16:10.198 "base_bdevs_list": [ 00:16:10.198 { 00:16:10.198 "name": null, 00:16:10.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.198 "is_configured": false, 00:16:10.198 "data_offset": 0, 00:16:10.198 "data_size": 65536 00:16:10.198 }, 00:16:10.198 { 00:16:10.198 "name": "BaseBdev2", 00:16:10.198 "uuid": "3c2cce8c-42d8-11ef-9ade-d5fc5159efa5", 00:16:10.198 "is_configured": true, 00:16:10.198 "data_offset": 0, 00:16:10.198 "data_size": 65536 
00:16:10.198 }, 00:16:10.198 { 00:16:10.198 "name": "BaseBdev3", 00:16:10.198 "uuid": "3cf50534-42d8-11ef-9ade-d5fc5159efa5", 00:16:10.198 "is_configured": true, 00:16:10.198 "data_offset": 0, 00:16:10.198 "data_size": 65536 00:16:10.198 }, 00:16:10.198 { 00:16:10.198 "name": "BaseBdev4", 00:16:10.198 "uuid": "3dceeda6-42d8-11ef-9ade-d5fc5159efa5", 00:16:10.198 "is_configured": true, 00:16:10.198 "data_offset": 0, 00:16:10.198 "data_size": 65536 00:16:10.198 } 00:16:10.198 ] 00:16:10.198 }' 00:16:10.198 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.198 18:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.456 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:10.456 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.456 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.456 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:10.715 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:10.715 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.715 18:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:10.972 [2024-07-15 18:30:03.201691] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.972 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:10.972 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.972 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.972 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:11.230 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:11.230 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.230 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:11.488 [2024-07-15 18:30:03.681806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.488 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:11.488 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:11.488 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.488 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:11.745 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:11.745 18:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.745 18:30:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:12.002 [2024-07-15 18:30:04.233875] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:12.002 [2024-07-15 18:30:04.233916] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.002 [2024-07-15 18:30:04.239904] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.002 [2024-07-15 18:30:04.239924] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.002 [2024-07-15 18:30:04.239929] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33ff33834a00 name Existed_Raid, state offline 00:16:12.002 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:12.002 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.002 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.002 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:12.274 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.548 BaseBdev2 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.548 18:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.806 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.065 [ 00:16:13.065 { 00:16:13.065 "name": "BaseBdev2", 00:16:13.065 "aliases": [ 00:16:13.065 "41436300-42d8-11ef-9ade-d5fc5159efa5" 00:16:13.065 ], 00:16:13.065 "product_name": "Malloc disk", 00:16:13.065 "block_size": 512, 00:16:13.065 "num_blocks": 65536, 00:16:13.065 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:13.065 "assigned_rate_limits": { 00:16:13.065 "rw_ios_per_sec": 0, 00:16:13.065 "rw_mbytes_per_sec": 0, 00:16:13.065 
"r_mbytes_per_sec": 0, 00:16:13.065 "w_mbytes_per_sec": 0 00:16:13.065 }, 00:16:13.065 "claimed": false, 00:16:13.065 "zoned": false, 00:16:13.065 "supported_io_types": { 00:16:13.065 "read": true, 00:16:13.065 "write": true, 00:16:13.065 "unmap": true, 00:16:13.065 "flush": true, 00:16:13.065 "reset": true, 00:16:13.065 "nvme_admin": false, 00:16:13.065 "nvme_io": false, 00:16:13.065 "nvme_io_md": false, 00:16:13.065 "write_zeroes": true, 00:16:13.065 "zcopy": true, 00:16:13.065 "get_zone_info": false, 00:16:13.065 "zone_management": false, 00:16:13.065 "zone_append": false, 00:16:13.065 "compare": false, 00:16:13.065 "compare_and_write": false, 00:16:13.065 "abort": true, 00:16:13.065 "seek_hole": false, 00:16:13.065 "seek_data": false, 00:16:13.065 "copy": true, 00:16:13.065 "nvme_iov_md": false 00:16:13.065 }, 00:16:13.065 "memory_domains": [ 00:16:13.065 { 00:16:13.065 "dma_device_id": "system", 00:16:13.065 "dma_device_type": 1 00:16:13.065 }, 00:16:13.065 { 00:16:13.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.065 "dma_device_type": 2 00:16:13.065 } 00:16:13.065 ], 00:16:13.065 "driver_specific": {} 00:16:13.065 } 00:16:13.065 ] 00:16:13.065 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:13.065 18:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:13.065 18:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.065 18:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.323 BaseBdev3 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.323 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.581 18:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.839 [ 00:16:13.839 { 00:16:13.839 "name": "BaseBdev3", 00:16:13.839 "aliases": [ 00:16:13.839 "41b623ab-42d8-11ef-9ade-d5fc5159efa5" 00:16:13.839 ], 00:16:13.839 "product_name": "Malloc disk", 00:16:13.839 "block_size": 512, 00:16:13.839 "num_blocks": 65536, 00:16:13.839 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:13.839 "assigned_rate_limits": { 00:16:13.839 "rw_ios_per_sec": 0, 00:16:13.839 "rw_mbytes_per_sec": 0, 00:16:13.839 "r_mbytes_per_sec": 0, 00:16:13.839 "w_mbytes_per_sec": 0 00:16:13.839 }, 00:16:13.839 "claimed": false, 00:16:13.839 "zoned": false, 00:16:13.839 "supported_io_types": { 00:16:13.839 "read": true, 00:16:13.839 "write": true, 00:16:13.839 "unmap": true, 00:16:13.839 "flush": true, 00:16:13.839 "reset": true, 00:16:13.839 "nvme_admin": false, 
00:16:13.839 "nvme_io": false, 00:16:13.839 "nvme_io_md": false, 00:16:13.839 "write_zeroes": true, 00:16:13.839 "zcopy": true, 00:16:13.839 "get_zone_info": false, 00:16:13.839 "zone_management": false, 00:16:13.839 "zone_append": false, 00:16:13.839 "compare": false, 00:16:13.839 "compare_and_write": false, 00:16:13.839 "abort": true, 00:16:13.839 "seek_hole": false, 00:16:13.839 "seek_data": false, 00:16:13.839 "copy": true, 00:16:13.839 "nvme_iov_md": false 00:16:13.839 }, 00:16:13.839 "memory_domains": [ 00:16:13.839 { 00:16:13.839 "dma_device_id": "system", 00:16:13.839 "dma_device_type": 1 00:16:13.839 }, 00:16:13.839 { 00:16:13.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.839 "dma_device_type": 2 00:16:13.839 } 00:16:13.839 ], 00:16:13.839 "driver_specific": {} 00:16:13.839 } 00:16:13.839 ] 00:16:13.839 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:13.839 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:13.839 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.839 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:14.097 BaseBdev4 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.097 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.355 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:14.614 [ 00:16:14.614 { 00:16:14.614 "name": "BaseBdev4", 00:16:14.614 "aliases": [ 00:16:14.615 "4231706f-42d8-11ef-9ade-d5fc5159efa5" 00:16:14.615 ], 00:16:14.615 "product_name": "Malloc disk", 00:16:14.615 "block_size": 512, 00:16:14.615 "num_blocks": 65536, 00:16:14.615 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:14.615 "assigned_rate_limits": { 00:16:14.615 "rw_ios_per_sec": 0, 00:16:14.615 "rw_mbytes_per_sec": 0, 00:16:14.615 "r_mbytes_per_sec": 0, 00:16:14.615 "w_mbytes_per_sec": 0 00:16:14.615 }, 00:16:14.615 "claimed": false, 00:16:14.615 "zoned": false, 00:16:14.615 "supported_io_types": { 00:16:14.615 "read": true, 00:16:14.615 "write": true, 00:16:14.615 "unmap": true, 00:16:14.615 "flush": true, 00:16:14.615 "reset": true, 00:16:14.615 "nvme_admin": false, 00:16:14.615 "nvme_io": false, 00:16:14.615 "nvme_io_md": false, 00:16:14.615 "write_zeroes": true, 00:16:14.615 "zcopy": true, 00:16:14.615 "get_zone_info": false, 00:16:14.615 "zone_management": false, 00:16:14.615 "zone_append": false, 00:16:14.615 "compare": false, 00:16:14.615 "compare_and_write": false, 00:16:14.615 "abort": true, 
00:16:14.615 "seek_hole": false, 00:16:14.615 "seek_data": false, 00:16:14.615 "copy": true, 00:16:14.615 "nvme_iov_md": false 00:16:14.615 }, 00:16:14.615 "memory_domains": [ 00:16:14.615 { 00:16:14.615 "dma_device_id": "system", 00:16:14.615 "dma_device_type": 1 00:16:14.615 }, 00:16:14.615 { 00:16:14.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.615 "dma_device_type": 2 00:16:14.615 } 00:16:14.615 ], 00:16:14.615 "driver_specific": {} 00:16:14.615 } 00:16:14.615 ] 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:14.615 [2024-07-15 18:30:06.980065] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.615 [2024-07-15 18:30:06.980117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.615 [2024-07-15 18:30:06.980126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.615 [2024-07-15 18:30:06.980781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.615 [2024-07-15 18:30:06.980800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.615 18:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.615 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.615 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.182 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.182 "name": "Existed_Raid", 00:16:15.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.182 "strip_size_kb": 0, 00:16:15.182 "state": "configuring", 00:16:15.182 "raid_level": "raid1", 00:16:15.182 "superblock": false, 00:16:15.182 "num_base_bdevs": 4, 00:16:15.182 
"num_base_bdevs_discovered": 3, 00:16:15.182 "num_base_bdevs_operational": 4, 00:16:15.182 "base_bdevs_list": [ 00:16:15.182 { 00:16:15.182 "name": "BaseBdev1", 00:16:15.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.182 "is_configured": false, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 0 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev2", 00:16:15.182 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev3", 00:16:15.182 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev4", 00:16:15.182 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 } 00:16:15.182 ] 00:16:15.182 }' 00:16:15.182 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.182 18:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.441 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:15.712 [2024-07-15 18:30:07.904131] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.712 18:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.972 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.972 "name": "Existed_Raid", 00:16:15.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.972 "strip_size_kb": 0, 00:16:15.972 "state": "configuring", 00:16:15.972 "raid_level": "raid1", 00:16:15.972 "superblock": false, 00:16:15.972 "num_base_bdevs": 4, 00:16:15.972 "num_base_bdevs_discovered": 2, 00:16:15.972 "num_base_bdevs_operational": 4, 00:16:15.972 "base_bdevs_list": [ 00:16:15.972 { 00:16:15.972 "name": 
"BaseBdev1", 00:16:15.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.972 "is_configured": false, 00:16:15.972 "data_offset": 0, 00:16:15.972 "data_size": 0 00:16:15.972 }, 00:16:15.972 { 00:16:15.972 "name": null, 00:16:15.972 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.972 "is_configured": false, 00:16:15.972 "data_offset": 0, 00:16:15.972 "data_size": 65536 00:16:15.972 }, 00:16:15.972 { 00:16:15.972 "name": "BaseBdev3", 00:16:15.972 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.972 "is_configured": true, 00:16:15.972 "data_offset": 0, 00:16:15.972 "data_size": 65536 00:16:15.972 }, 00:16:15.972 { 00:16:15.972 "name": "BaseBdev4", 00:16:15.972 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:15.972 "is_configured": true, 00:16:15.972 "data_offset": 0, 00:16:15.972 "data_size": 65536 00:16:15.972 } 00:16:15.972 ] 00:16:15.972 }' 00:16:15.972 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.972 18:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.231 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:16.231 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.797 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:16.797 18:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:17.056 [2024-07-15 18:30:09.204442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.056 BaseBdev1 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:17.056 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.314 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.573 [ 00:16:17.573 { 00:16:17.573 "name": "BaseBdev1", 00:16:17.573 "aliases": [ 00:16:17.573 "43f179e5-42d8-11ef-9ade-d5fc5159efa5" 00:16:17.573 ], 00:16:17.573 "product_name": "Malloc disk", 00:16:17.573 "block_size": 512, 00:16:17.573 "num_blocks": 65536, 00:16:17.573 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:17.573 "assigned_rate_limits": { 00:16:17.573 "rw_ios_per_sec": 0, 00:16:17.573 "rw_mbytes_per_sec": 0, 00:16:17.573 "r_mbytes_per_sec": 0, 00:16:17.573 "w_mbytes_per_sec": 0 00:16:17.573 }, 00:16:17.573 "claimed": true, 00:16:17.573 "claim_type": "exclusive_write", 00:16:17.573 "zoned": false, 
00:16:17.573 "supported_io_types": { 00:16:17.573 "read": true, 00:16:17.573 "write": true, 00:16:17.573 "unmap": true, 00:16:17.573 "flush": true, 00:16:17.573 "reset": true, 00:16:17.573 "nvme_admin": false, 00:16:17.573 "nvme_io": false, 00:16:17.573 "nvme_io_md": false, 00:16:17.573 "write_zeroes": true, 00:16:17.573 "zcopy": true, 00:16:17.573 "get_zone_info": false, 00:16:17.573 "zone_management": false, 00:16:17.573 "zone_append": false, 00:16:17.573 "compare": false, 00:16:17.573 "compare_and_write": false, 00:16:17.573 "abort": true, 00:16:17.573 "seek_hole": false, 00:16:17.573 "seek_data": false, 00:16:17.573 "copy": true, 00:16:17.573 "nvme_iov_md": false 00:16:17.573 }, 00:16:17.573 "memory_domains": [ 00:16:17.573 { 00:16:17.573 "dma_device_id": "system", 00:16:17.573 "dma_device_type": 1 00:16:17.573 }, 00:16:17.573 { 00:16:17.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.573 "dma_device_type": 2 00:16:17.573 } 00:16:17.573 ], 00:16:17.573 "driver_specific": {} 00:16:17.573 } 00:16:17.573 ] 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.573 18:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.835 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.835 "name": "Existed_Raid", 00:16:17.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.835 "strip_size_kb": 0, 00:16:17.835 "state": "configuring", 00:16:17.835 "raid_level": "raid1", 00:16:17.835 "superblock": false, 00:16:17.835 "num_base_bdevs": 4, 00:16:17.835 "num_base_bdevs_discovered": 3, 00:16:17.835 "num_base_bdevs_operational": 4, 00:16:17.835 "base_bdevs_list": [ 00:16:17.835 { 00:16:17.835 "name": "BaseBdev1", 00:16:17.835 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:17.835 "is_configured": true, 00:16:17.835 "data_offset": 0, 00:16:17.835 "data_size": 65536 00:16:17.835 }, 00:16:17.835 { 00:16:17.835 "name": null, 00:16:17.835 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:17.835 "is_configured": false, 00:16:17.835 "data_offset": 0, 00:16:17.835 "data_size": 65536 00:16:17.835 }, 
00:16:17.835 { 00:16:17.835 "name": "BaseBdev3", 00:16:17.835 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:17.835 "is_configured": true, 00:16:17.835 "data_offset": 0, 00:16:17.835 "data_size": 65536 00:16:17.835 }, 00:16:17.835 { 00:16:17.835 "name": "BaseBdev4", 00:16:17.835 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:17.835 "is_configured": true, 00:16:17.835 "data_offset": 0, 00:16:17.835 "data_size": 65536 00:16:17.835 } 00:16:17.835 ] 00:16:17.835 }' 00:16:17.835 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.835 18:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.098 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.357 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:18.357 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:18.616 [2024-07-15 18:30:10.888385] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.616 18:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.876 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.876 "name": "Existed_Raid", 00:16:18.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.876 "strip_size_kb": 0, 00:16:18.876 "state": "configuring", 00:16:18.876 "raid_level": "raid1", 00:16:18.876 "superblock": false, 00:16:18.876 "num_base_bdevs": 4, 00:16:18.876 "num_base_bdevs_discovered": 2, 00:16:18.876 "num_base_bdevs_operational": 4, 00:16:18.876 "base_bdevs_list": [ 00:16:18.876 { 00:16:18.876 "name": "BaseBdev1", 00:16:18.876 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:18.876 "is_configured": true, 00:16:18.876 "data_offset": 
0, 00:16:18.876 "data_size": 65536 00:16:18.876 }, 00:16:18.876 { 00:16:18.876 "name": null, 00:16:18.876 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:18.876 "is_configured": false, 00:16:18.876 "data_offset": 0, 00:16:18.876 "data_size": 65536 00:16:18.876 }, 00:16:18.876 { 00:16:18.876 "name": null, 00:16:18.876 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:18.876 "is_configured": false, 00:16:18.876 "data_offset": 0, 00:16:18.876 "data_size": 65536 00:16:18.876 }, 00:16:18.876 { 00:16:18.876 "name": "BaseBdev4", 00:16:18.876 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:18.876 "is_configured": true, 00:16:18.876 "data_offset": 0, 00:16:18.876 "data_size": 65536 00:16:18.876 } 00:16:18.876 ] 00:16:18.876 }' 00:16:18.876 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.876 18:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.134 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.134 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.393 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:19.393 18:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:19.652 [2024-07-15 18:30:12.020498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.652 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.220 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.220 "name": "Existed_Raid", 00:16:20.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.221 "strip_size_kb": 0, 00:16:20.221 "state": "configuring", 00:16:20.221 "raid_level": "raid1", 00:16:20.221 "superblock": false, 00:16:20.221 "num_base_bdevs": 4, 
00:16:20.221 "num_base_bdevs_discovered": 3, 00:16:20.221 "num_base_bdevs_operational": 4, 00:16:20.221 "base_bdevs_list": [ 00:16:20.221 { 00:16:20.221 "name": "BaseBdev1", 00:16:20.221 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:20.221 "is_configured": true, 00:16:20.221 "data_offset": 0, 00:16:20.221 "data_size": 65536 00:16:20.221 }, 00:16:20.221 { 00:16:20.221 "name": null, 00:16:20.221 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:20.221 "is_configured": false, 00:16:20.221 "data_offset": 0, 00:16:20.221 "data_size": 65536 00:16:20.221 }, 00:16:20.221 { 00:16:20.221 "name": "BaseBdev3", 00:16:20.221 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:20.221 "is_configured": true, 00:16:20.221 "data_offset": 0, 00:16:20.221 "data_size": 65536 00:16:20.221 }, 00:16:20.221 { 00:16:20.221 "name": "BaseBdev4", 00:16:20.221 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:20.221 "is_configured": true, 00:16:20.221 "data_offset": 0, 00:16:20.221 "data_size": 65536 00:16:20.221 } 00:16:20.221 ] 00:16:20.221 }' 00:16:20.221 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.221 18:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.479 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.479 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:20.738 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:20.738 18:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:20.997 [2024-07-15 18:30:13.184702] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.997 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.255 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:16:21.255 "name": "Existed_Raid", 00:16:21.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.255 "strip_size_kb": 0, 00:16:21.255 "state": "configuring", 00:16:21.255 "raid_level": "raid1", 00:16:21.255 "superblock": false, 00:16:21.255 "num_base_bdevs": 4, 00:16:21.255 "num_base_bdevs_discovered": 2, 00:16:21.255 "num_base_bdevs_operational": 4, 00:16:21.255 "base_bdevs_list": [ 00:16:21.255 { 00:16:21.255 "name": null, 00:16:21.255 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:21.255 "is_configured": false, 00:16:21.255 "data_offset": 0, 00:16:21.255 "data_size": 65536 00:16:21.255 }, 00:16:21.255 { 00:16:21.255 "name": null, 00:16:21.255 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:21.255 "is_configured": false, 00:16:21.255 "data_offset": 0, 00:16:21.255 "data_size": 65536 00:16:21.255 }, 00:16:21.255 { 00:16:21.255 "name": "BaseBdev3", 00:16:21.255 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:21.255 "is_configured": true, 00:16:21.255 "data_offset": 0, 00:16:21.255 "data_size": 65536 00:16:21.255 }, 00:16:21.255 { 00:16:21.255 "name": "BaseBdev4", 00:16:21.255 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:21.255 "is_configured": true, 00:16:21.255 "data_offset": 0, 00:16:21.255 "data_size": 65536 00:16:21.255 } 00:16:21.255 ] 00:16:21.255 }' 00:16:21.255 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.255 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.513 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.513 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:21.772 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:21.772 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:22.029 [2024-07-15 18:30:14.373062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.029 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.595 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.595 "name": "Existed_Raid", 00:16:22.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.595 "strip_size_kb": 0, 00:16:22.596 "state": "configuring", 00:16:22.596 "raid_level": "raid1", 00:16:22.596 "superblock": false, 00:16:22.596 "num_base_bdevs": 4, 00:16:22.596 "num_base_bdevs_discovered": 3, 00:16:22.596 "num_base_bdevs_operational": 4, 00:16:22.596 "base_bdevs_list": [ 00:16:22.596 { 00:16:22.596 "name": null, 00:16:22.596 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:22.596 "is_configured": false, 00:16:22.596 "data_offset": 0, 00:16:22.596 "data_size": 65536 00:16:22.596 }, 00:16:22.596 { 00:16:22.596 "name": "BaseBdev2", 00:16:22.596 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:22.596 "is_configured": true, 00:16:22.596 "data_offset": 0, 00:16:22.596 "data_size": 65536 00:16:22.596 }, 00:16:22.596 { 00:16:22.596 "name": "BaseBdev3", 00:16:22.596 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:22.596 "is_configured": true, 00:16:22.596 "data_offset": 0, 00:16:22.596 "data_size": 65536 00:16:22.596 }, 00:16:22.596 { 00:16:22.596 "name": "BaseBdev4", 00:16:22.596 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:22.596 "is_configured": true, 00:16:22.596 "data_offset": 0, 00:16:22.596 "data_size": 65536 00:16:22.596 } 00:16:22.596 ] 00:16:22.596 }' 00:16:22.596 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.596 18:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.853 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.853 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.111 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:23.111 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.111 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:23.370 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 43f179e5-42d8-11ef-9ade-d5fc5159efa5 00:16:23.627 [2024-07-15 18:30:15.845338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:23.627 [2024-07-15 18:30:15.845372] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33ff33834f00 00:16:23.627 [2024-07-15 18:30:15.845376] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:23.627 [2024-07-15 18:30:15.845410] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33ff33897e20 00:16:23.627 [2024-07-15 18:30:15.845502] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33ff33834f00 00:16:23.627 [2024-07-15 18:30:15.845507] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x33ff33834f00 00:16:23.627 [2024-07-15 18:30:15.845551] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.627 NewBaseBdev 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:23.627 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.884 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:24.141 [ 00:16:24.141 { 00:16:24.141 "name": "NewBaseBdev", 00:16:24.141 "aliases": [ 00:16:24.141 "43f179e5-42d8-11ef-9ade-d5fc5159efa5" 00:16:24.141 ], 00:16:24.141 "product_name": "Malloc disk", 00:16:24.141 "block_size": 512, 00:16:24.141 "num_blocks": 65536, 00:16:24.141 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.141 "assigned_rate_limits": { 00:16:24.141 "rw_ios_per_sec": 0, 00:16:24.141 "rw_mbytes_per_sec": 0, 00:16:24.141 "r_mbytes_per_sec": 0, 00:16:24.141 "w_mbytes_per_sec": 0 00:16:24.141 }, 00:16:24.141 "claimed": true, 00:16:24.141 "claim_type": "exclusive_write", 00:16:24.141 "zoned": false, 00:16:24.141 "supported_io_types": { 00:16:24.141 "read": true, 00:16:24.141 "write": true, 00:16:24.141 "unmap": true, 00:16:24.141 "flush": true, 00:16:24.141 "reset": true, 00:16:24.141 "nvme_admin": false, 00:16:24.141 "nvme_io": false, 00:16:24.141 "nvme_io_md": false, 00:16:24.141 "write_zeroes": true, 00:16:24.141 "zcopy": true, 00:16:24.141 "get_zone_info": false, 00:16:24.141 "zone_management": false, 00:16:24.141 "zone_append": false, 00:16:24.141 "compare": false, 00:16:24.141 "compare_and_write": false, 00:16:24.141 "abort": true, 00:16:24.141 "seek_hole": false, 00:16:24.141 "seek_data": false, 00:16:24.141 "copy": true, 00:16:24.141 "nvme_iov_md": false 00:16:24.141 }, 00:16:24.141 "memory_domains": [ 00:16:24.141 { 00:16:24.141 "dma_device_id": "system", 00:16:24.141 "dma_device_type": 1 00:16:24.141 }, 00:16:24.141 { 00:16:24.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.141 "dma_device_type": 2 00:16:24.141 } 00:16:24.141 ], 00:16:24.141 "driver_specific": {} 00:16:24.141 } 00:16:24.141 ] 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.141 18:30:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.141 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.399 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.399 "name": "Existed_Raid", 00:16:24.399 "uuid": "47e6d4b5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.399 "strip_size_kb": 0, 00:16:24.399 "state": "online", 00:16:24.399 "raid_level": "raid1", 00:16:24.399 "superblock": false, 00:16:24.399 "num_base_bdevs": 4, 00:16:24.399 "num_base_bdevs_discovered": 4, 00:16:24.399 "num_base_bdevs_operational": 4, 00:16:24.399 "base_bdevs_list": [ 00:16:24.399 { 00:16:24.399 "name": "NewBaseBdev", 00:16:24.399 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.399 "is_configured": true, 00:16:24.399 "data_offset": 0, 00:16:24.399 "data_size": 65536 00:16:24.399 }, 00:16:24.399 { 00:16:24.399 "name": "BaseBdev2", 00:16:24.399 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.399 "is_configured": true, 00:16:24.399 "data_offset": 0, 00:16:24.399 "data_size": 65536 00:16:24.399 }, 00:16:24.399 { 00:16:24.399 "name": "BaseBdev3", 00:16:24.399 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.399 "is_configured": true, 00:16:24.399 "data_offset": 0, 00:16:24.399 "data_size": 65536 00:16:24.399 }, 00:16:24.399 { 00:16:24.399 "name": "BaseBdev4", 00:16:24.399 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.399 "is_configured": true, 00:16:24.399 "data_offset": 0, 00:16:24.399 "data_size": 65536 00:16:24.399 } 00:16:24.399 ] 00:16:24.399 }' 00:16:24.399 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.399 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:24.657 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:24.916 
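The verify_raid_bdev_properties pass entered just above drives everything through the same two RPCs used throughout this run: bdev_get_bdevs dumps a bdev's JSON and jq asserts individual fields (block_size, md_size, md_interleave, dif_type). A minimal standalone sketch of that query-then-assert pattern, assuming only what this log shows — the rpc.py path and the -s /var/tmp/spdk-raid.sock socket are copied from the trace, while the variable names and the single assertion are illustrative, not part of the harness:

    #!/usr/bin/env bash
    # Sketch: query one bdev over the app's RPC socket and assert a field,
    # mirroring the bdev_get_bdevs + jq checks in this test.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid)
    # The harness checks block_size == 512 on each bdev; same idea here.
    [[ $(jq -r '.[0].block_size' <<<"$info") == 512 ]] || exit 1

The per-bdev blocks that follow perform exactly this loop for NewBaseBdev, BaseBdev2, BaseBdev3, and BaseBdev4.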
[2024-07-15 18:30:17.153333] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:24.916 "name": "Existed_Raid", 00:16:24.916 "aliases": [ 00:16:24.916 "47e6d4b5-42d8-11ef-9ade-d5fc5159efa5" 00:16:24.916 ], 00:16:24.916 "product_name": "Raid Volume", 00:16:24.916 "block_size": 512, 00:16:24.916 "num_blocks": 65536, 00:16:24.916 "uuid": "47e6d4b5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "assigned_rate_limits": { 00:16:24.916 "rw_ios_per_sec": 0, 00:16:24.916 "rw_mbytes_per_sec": 0, 00:16:24.916 "r_mbytes_per_sec": 0, 00:16:24.916 "w_mbytes_per_sec": 0 00:16:24.916 }, 00:16:24.916 "claimed": false, 00:16:24.916 "zoned": false, 00:16:24.916 "supported_io_types": { 00:16:24.916 "read": true, 00:16:24.916 "write": true, 00:16:24.916 "unmap": false, 00:16:24.916 "flush": false, 00:16:24.916 "reset": true, 00:16:24.916 "nvme_admin": false, 00:16:24.916 "nvme_io": false, 00:16:24.916 "nvme_io_md": false, 00:16:24.916 "write_zeroes": true, 00:16:24.916 "zcopy": false, 00:16:24.916 "get_zone_info": false, 00:16:24.916 "zone_management": false, 00:16:24.916 "zone_append": false, 00:16:24.916 "compare": false, 00:16:24.916 "compare_and_write": false, 00:16:24.916 "abort": false, 00:16:24.916 "seek_hole": false, 00:16:24.916 "seek_data": false, 00:16:24.916 "copy": false, 00:16:24.916 "nvme_iov_md": false 00:16:24.916 }, 00:16:24.916 "memory_domains": [ 00:16:24.916 { 00:16:24.916 "dma_device_id": "system", 00:16:24.916 "dma_device_type": 1 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.916 "dma_device_type": 2 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "system", 00:16:24.916 "dma_device_type": 1 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.916 "dma_device_type": 2 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "system", 00:16:24.916 "dma_device_type": 1 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.916 "dma_device_type": 2 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "system", 00:16:24.916 "dma_device_type": 1 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.916 "dma_device_type": 2 00:16:24.916 } 00:16:24.916 ], 00:16:24.916 "driver_specific": { 00:16:24.916 "raid": { 00:16:24.916 "uuid": "47e6d4b5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "strip_size_kb": 0, 00:16:24.916 "state": "online", 00:16:24.916 "raid_level": "raid1", 00:16:24.916 "superblock": false, 00:16:24.916 "num_base_bdevs": 4, 00:16:24.916 "num_base_bdevs_discovered": 4, 00:16:24.916 "num_base_bdevs_operational": 4, 00:16:24.916 "base_bdevs_list": [ 00:16:24.916 { 00:16:24.916 "name": "NewBaseBdev", 00:16:24.916 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "is_configured": true, 00:16:24.916 "data_offset": 0, 00:16:24.916 "data_size": 65536 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "name": "BaseBdev2", 00:16:24.916 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "is_configured": true, 00:16:24.916 "data_offset": 0, 00:16:24.916 "data_size": 65536 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "name": "BaseBdev3", 00:16:24.916 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "is_configured": true, 00:16:24.916 "data_offset": 0, 00:16:24.916 "data_size": 65536 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "name": "BaseBdev4", 
00:16:24.916 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:24.916 "is_configured": true, 00:16:24.916 "data_offset": 0, 00:16:24.916 "data_size": 65536 00:16:24.916 } 00:16:24.916 ] 00:16:24.916 } 00:16:24.916 } 00:16:24.916 }' 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:24.916 BaseBdev2 00:16:24.916 BaseBdev3 00:16:24.916 BaseBdev4' 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:24.916 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.174 "name": "NewBaseBdev", 00:16:25.174 "aliases": [ 00:16:25.174 "43f179e5-42d8-11ef-9ade-d5fc5159efa5" 00:16:25.174 ], 00:16:25.174 "product_name": "Malloc disk", 00:16:25.174 "block_size": 512, 00:16:25.174 "num_blocks": 65536, 00:16:25.174 "uuid": "43f179e5-42d8-11ef-9ade-d5fc5159efa5", 00:16:25.174 "assigned_rate_limits": { 00:16:25.174 "rw_ios_per_sec": 0, 00:16:25.174 "rw_mbytes_per_sec": 0, 00:16:25.174 "r_mbytes_per_sec": 0, 00:16:25.174 "w_mbytes_per_sec": 0 00:16:25.174 }, 00:16:25.174 "claimed": true, 00:16:25.174 "claim_type": "exclusive_write", 00:16:25.174 "zoned": false, 00:16:25.174 "supported_io_types": { 00:16:25.174 "read": true, 00:16:25.174 "write": true, 00:16:25.174 "unmap": true, 00:16:25.174 "flush": true, 00:16:25.174 "reset": true, 00:16:25.174 "nvme_admin": false, 00:16:25.174 "nvme_io": false, 00:16:25.174 "nvme_io_md": false, 00:16:25.174 "write_zeroes": true, 00:16:25.174 "zcopy": true, 00:16:25.174 "get_zone_info": false, 00:16:25.174 "zone_management": false, 00:16:25.174 "zone_append": false, 00:16:25.174 "compare": false, 00:16:25.174 "compare_and_write": false, 00:16:25.174 "abort": true, 00:16:25.174 "seek_hole": false, 00:16:25.174 "seek_data": false, 00:16:25.174 "copy": true, 00:16:25.174 "nvme_iov_md": false 00:16:25.174 }, 00:16:25.174 "memory_domains": [ 00:16:25.174 { 00:16:25.174 "dma_device_id": "system", 00:16:25.174 "dma_device_type": 1 00:16:25.174 }, 00:16:25.174 { 00:16:25.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.174 "dma_device_type": 2 00:16:25.174 } 00:16:25.174 ], 00:16:25.174 "driver_specific": {} 00:16:25.174 }' 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.174 
18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:25.174 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.433 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.433 "name": "BaseBdev2", 00:16:25.433 "aliases": [ 00:16:25.433 "41436300-42d8-11ef-9ade-d5fc5159efa5" 00:16:25.433 ], 00:16:25.433 "product_name": "Malloc disk", 00:16:25.433 "block_size": 512, 00:16:25.433 "num_blocks": 65536, 00:16:25.433 "uuid": "41436300-42d8-11ef-9ade-d5fc5159efa5", 00:16:25.433 "assigned_rate_limits": { 00:16:25.433 "rw_ios_per_sec": 0, 00:16:25.433 "rw_mbytes_per_sec": 0, 00:16:25.433 "r_mbytes_per_sec": 0, 00:16:25.433 "w_mbytes_per_sec": 0 00:16:25.433 }, 00:16:25.433 "claimed": true, 00:16:25.433 "claim_type": "exclusive_write", 00:16:25.433 "zoned": false, 00:16:25.433 "supported_io_types": { 00:16:25.433 "read": true, 00:16:25.433 "write": true, 00:16:25.433 "unmap": true, 00:16:25.433 "flush": true, 00:16:25.433 "reset": true, 00:16:25.433 "nvme_admin": false, 00:16:25.433 "nvme_io": false, 00:16:25.433 "nvme_io_md": false, 00:16:25.433 "write_zeroes": true, 00:16:25.433 "zcopy": true, 00:16:25.433 "get_zone_info": false, 00:16:25.433 "zone_management": false, 00:16:25.433 "zone_append": false, 00:16:25.433 "compare": false, 00:16:25.433 "compare_and_write": false, 00:16:25.433 "abort": true, 00:16:25.433 "seek_hole": false, 00:16:25.433 "seek_data": false, 00:16:25.433 "copy": true, 00:16:25.434 "nvme_iov_md": false 00:16:25.434 }, 00:16:25.434 "memory_domains": [ 00:16:25.434 { 00:16:25.434 "dma_device_id": "system", 00:16:25.434 "dma_device_type": 1 00:16:25.434 }, 00:16:25.434 { 00:16:25.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.434 "dma_device_type": 2 00:16:25.434 } 00:16:25.434 ], 00:16:25.434 "driver_specific": {} 00:16:25.434 }' 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.434 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:16:25.752 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.752 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.752 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.752 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:25.752 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.026 "name": "BaseBdev3", 00:16:26.026 "aliases": [ 00:16:26.026 "41b623ab-42d8-11ef-9ade-d5fc5159efa5" 00:16:26.026 ], 00:16:26.026 "product_name": "Malloc disk", 00:16:26.026 "block_size": 512, 00:16:26.026 "num_blocks": 65536, 00:16:26.026 "uuid": "41b623ab-42d8-11ef-9ade-d5fc5159efa5", 00:16:26.026 "assigned_rate_limits": { 00:16:26.026 "rw_ios_per_sec": 0, 00:16:26.026 "rw_mbytes_per_sec": 0, 00:16:26.026 "r_mbytes_per_sec": 0, 00:16:26.026 "w_mbytes_per_sec": 0 00:16:26.026 }, 00:16:26.026 "claimed": true, 00:16:26.026 "claim_type": "exclusive_write", 00:16:26.026 "zoned": false, 00:16:26.026 "supported_io_types": { 00:16:26.026 "read": true, 00:16:26.026 "write": true, 00:16:26.026 "unmap": true, 00:16:26.026 "flush": true, 00:16:26.026 "reset": true, 00:16:26.026 "nvme_admin": false, 00:16:26.026 "nvme_io": false, 00:16:26.026 "nvme_io_md": false, 00:16:26.026 "write_zeroes": true, 00:16:26.026 "zcopy": true, 00:16:26.026 "get_zone_info": false, 00:16:26.026 "zone_management": false, 00:16:26.026 "zone_append": false, 00:16:26.026 "compare": false, 00:16:26.026 "compare_and_write": false, 00:16:26.026 "abort": true, 00:16:26.026 "seek_hole": false, 00:16:26.026 "seek_data": false, 00:16:26.026 "copy": true, 00:16:26.026 "nvme_iov_md": false 00:16:26.026 }, 00:16:26.026 "memory_domains": [ 00:16:26.026 { 00:16:26.026 "dma_device_id": "system", 00:16:26.026 "dma_device_type": 1 00:16:26.026 }, 00:16:26.026 { 00:16:26.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.026 "dma_device_type": 2 00:16:26.026 } 00:16:26.026 ], 00:16:26.026 "driver_specific": {} 00:16:26.026 }' 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.026 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.027 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:16:26.027 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.027 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:26.027 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.285 "name": "BaseBdev4", 00:16:26.285 "aliases": [ 00:16:26.285 "4231706f-42d8-11ef-9ade-d5fc5159efa5" 00:16:26.285 ], 00:16:26.285 "product_name": "Malloc disk", 00:16:26.285 "block_size": 512, 00:16:26.285 "num_blocks": 65536, 00:16:26.285 "uuid": "4231706f-42d8-11ef-9ade-d5fc5159efa5", 00:16:26.285 "assigned_rate_limits": { 00:16:26.285 "rw_ios_per_sec": 0, 00:16:26.285 "rw_mbytes_per_sec": 0, 00:16:26.285 "r_mbytes_per_sec": 0, 00:16:26.285 "w_mbytes_per_sec": 0 00:16:26.285 }, 00:16:26.285 "claimed": true, 00:16:26.285 "claim_type": "exclusive_write", 00:16:26.285 "zoned": false, 00:16:26.285 "supported_io_types": { 00:16:26.285 "read": true, 00:16:26.285 "write": true, 00:16:26.285 "unmap": true, 00:16:26.285 "flush": true, 00:16:26.285 "reset": true, 00:16:26.285 "nvme_admin": false, 00:16:26.285 "nvme_io": false, 00:16:26.285 "nvme_io_md": false, 00:16:26.285 "write_zeroes": true, 00:16:26.285 "zcopy": true, 00:16:26.285 "get_zone_info": false, 00:16:26.285 "zone_management": false, 00:16:26.285 "zone_append": false, 00:16:26.285 "compare": false, 00:16:26.285 "compare_and_write": false, 00:16:26.285 "abort": true, 00:16:26.285 "seek_hole": false, 00:16:26.285 "seek_data": false, 00:16:26.285 "copy": true, 00:16:26.285 "nvme_iov_md": false 00:16:26.285 }, 00:16:26.285 "memory_domains": [ 00:16:26.285 { 00:16:26.285 "dma_device_id": "system", 00:16:26.285 "dma_device_type": 1 00:16:26.285 }, 00:16:26.285 { 00:16:26.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.285 "dma_device_type": 2 00:16:26.285 } 00:16:26.285 ], 00:16:26.285 "driver_specific": {} 00:16:26.285 }' 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.285 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.286 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
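With every base bdev past those checks, the harness tears the volume down via bdev_raid_delete, and the DEBUG lines that follow show the deconfigure path releasing the claimed base bdevs before the test process is killed. A hedged sketch of the same teardown ordering, reusing the rpc/sock variables from the sketch above (the explicit malloc cleanup loop is an illustrative assumption; this run relies on killing the bdev_svc process instead):

    # Sketch: remove the raid first, then its malloc base bdevs.
    "$rpc" -s "$sock" bdev_raid_delete Existed_Raid
    for b in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
        "$rpc" -s "$sock" bdev_malloc_delete "$b"
    done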
00:16:26.546 [2024-07-15 18:30:18.769399] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.546 [2024-07-15 18:30:18.769431] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.546 [2024-07-15 18:30:18.769458] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.546 [2024-07-15 18:30:18.769543] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.546 [2024-07-15 18:30:18.769548] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33ff33834f00 name Existed_Raid, state offline 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 63055 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 63055 ']' 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 63055 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 63055 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:26.546 killing process with pid 63055 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63055' 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 63055 00:16:26.546 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 63055 00:16:26.546 [2024-07-15 18:30:18.797683] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.546 [2024-07-15 18:30:18.831979] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.805 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:26.805 ************************************ 00:16:26.805 END TEST raid_state_function_test 00:16:26.805 ************************************ 00:16:26.805 00:16:26.805 real 0m28.057s 00:16:26.805 user 0m51.218s 00:16:26.805 sys 0m3.984s 00:16:26.805 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.805 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.805 18:30:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:26.805 18:30:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:26.805 18:30:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:26.805 18:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.805 18:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.806 ************************************ 00:16:26.806 START TEST raid_state_function_test_sb 00:16:26.806 ************************************ 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63874 00:16:26.806 Process raid pid: 63874 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63874' 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63874 /var/tmp/spdk-raid.sock 
00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63874 ']' 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.806 18:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.806 [2024-07-15 18:30:19.100009] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:16:26.806 [2024-07-15 18:30:19.100271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:27.384 EAL: TSC is not safe to use in SMP mode 00:16:27.384 EAL: TSC is not invariant 00:16:27.384 [2024-07-15 18:30:19.708060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.648 [2024-07-15 18:30:19.794552] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:27.648 [2024-07-15 18:30:19.797132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.648 [2024-07-15 18:30:19.798013] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.648 [2024-07-15 18:30:19.798028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.906 18:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.906 18:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:27.906 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:28.164 [2024-07-15 18:30:20.485989] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.164 [2024-07-15 18:30:20.486066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.164 [2024-07-15 18:30:20.486072] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.164 [2024-07-15 18:30:20.486080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.164 [2024-07-15 18:30:20.486083] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.164 [2024-07-15 18:30:20.486090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.164 [2024-07-15 18:30:20.486093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:28.164 [2024-07-15 18:30:20.486100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.164 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.422 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.422 "name": "Existed_Raid", 00:16:28.423 "uuid": "4aaaed81-42d8-11ef-9ade-d5fc5159efa5", 00:16:28.423 "strip_size_kb": 0, 00:16:28.423 "state": "configuring", 00:16:28.423 "raid_level": "raid1", 00:16:28.423 "superblock": true, 00:16:28.423 "num_base_bdevs": 4, 00:16:28.423 "num_base_bdevs_discovered": 0, 00:16:28.423 "num_base_bdevs_operational": 4, 00:16:28.423 "base_bdevs_list": [ 00:16:28.423 { 00:16:28.423 "name": "BaseBdev1", 00:16:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.423 "is_configured": false, 00:16:28.423 "data_offset": 0, 00:16:28.423 "data_size": 0 00:16:28.423 }, 00:16:28.423 { 00:16:28.423 "name": "BaseBdev2", 00:16:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.423 "is_configured": false, 00:16:28.423 "data_offset": 0, 00:16:28.423 "data_size": 0 00:16:28.423 }, 00:16:28.423 { 00:16:28.423 "name": "BaseBdev3", 00:16:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.423 "is_configured": false, 00:16:28.423 "data_offset": 0, 00:16:28.423 "data_size": 0 00:16:28.423 }, 00:16:28.423 { 00:16:28.423 "name": "BaseBdev4", 00:16:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.423 "is_configured": false, 00:16:28.423 "data_offset": 0, 00:16:28.423 "data_size": 0 00:16:28.423 } 00:16:28.423 ] 00:16:28.423 }' 00:16:28.423 18:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.423 18:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.989 18:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:28.989 [2024-07-15 18:30:21.342091] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.989 [2024-07-15 18:30:21.342131] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x76c59434500 name Existed_Raid, state configuring 00:16:28.989 18:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:29.278 [2024-07-15 18:30:21.586126] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.278 [2024-07-15 18:30:21.586193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.278 [2024-07-15 18:30:21.586198] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.278 [2024-07-15 18:30:21.586206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.278 [2024-07-15 18:30:21.586210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.278 [2024-07-15 18:30:21.586217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.278 [2024-07-15 18:30:21.586220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:29.278 [2024-07-15 18:30:21.586227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:29.278 18:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.536 [2024-07-15 18:30:21.875363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.536 BaseBdev1 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.536 18:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.793 18:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.052 [ 00:16:30.052 { 00:16:30.052 "name": "BaseBdev1", 00:16:30.052 "aliases": [ 00:16:30.052 "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5" 00:16:30.052 ], 00:16:30.052 "product_name": "Malloc disk", 00:16:30.052 "block_size": 512, 00:16:30.052 "num_blocks": 65536, 00:16:30.052 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:30.052 "assigned_rate_limits": { 00:16:30.052 "rw_ios_per_sec": 0, 00:16:30.052 "rw_mbytes_per_sec": 0, 00:16:30.052 "r_mbytes_per_sec": 0, 00:16:30.052 "w_mbytes_per_sec": 0 00:16:30.052 }, 00:16:30.052 "claimed": true, 00:16:30.052 "claim_type": "exclusive_write", 00:16:30.052 "zoned": false, 00:16:30.052 "supported_io_types": { 00:16:30.052 "read": true, 00:16:30.052 "write": true, 00:16:30.052 "unmap": true, 
00:16:30.052 "flush": true, 00:16:30.052 "reset": true, 00:16:30.052 "nvme_admin": false, 00:16:30.052 "nvme_io": false, 00:16:30.052 "nvme_io_md": false, 00:16:30.052 "write_zeroes": true, 00:16:30.052 "zcopy": true, 00:16:30.052 "get_zone_info": false, 00:16:30.052 "zone_management": false, 00:16:30.052 "zone_append": false, 00:16:30.052 "compare": false, 00:16:30.052 "compare_and_write": false, 00:16:30.052 "abort": true, 00:16:30.052 "seek_hole": false, 00:16:30.052 "seek_data": false, 00:16:30.052 "copy": true, 00:16:30.052 "nvme_iov_md": false 00:16:30.052 }, 00:16:30.052 "memory_domains": [ 00:16:30.052 { 00:16:30.052 "dma_device_id": "system", 00:16:30.052 "dma_device_type": 1 00:16:30.052 }, 00:16:30.052 { 00:16:30.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.052 "dma_device_type": 2 00:16:30.052 } 00:16:30.052 ], 00:16:30.052 "driver_specific": {} 00:16:30.052 } 00:16:30.052 ] 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.052 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.311 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.311 "name": "Existed_Raid", 00:16:30.311 "uuid": "4b52cbc3-42d8-11ef-9ade-d5fc5159efa5", 00:16:30.311 "strip_size_kb": 0, 00:16:30.311 "state": "configuring", 00:16:30.311 "raid_level": "raid1", 00:16:30.311 "superblock": true, 00:16:30.311 "num_base_bdevs": 4, 00:16:30.311 "num_base_bdevs_discovered": 1, 00:16:30.311 "num_base_bdevs_operational": 4, 00:16:30.311 "base_bdevs_list": [ 00:16:30.311 { 00:16:30.311 "name": "BaseBdev1", 00:16:30.311 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:30.311 "is_configured": true, 00:16:30.311 "data_offset": 2048, 00:16:30.311 "data_size": 63488 00:16:30.311 }, 00:16:30.311 { 00:16:30.311 "name": "BaseBdev2", 00:16:30.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.311 "is_configured": false, 00:16:30.311 "data_offset": 0, 00:16:30.311 "data_size": 0 00:16:30.311 }, 00:16:30.311 { 00:16:30.311 "name": "BaseBdev3", 00:16:30.311 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:30.311 "is_configured": false, 00:16:30.311 "data_offset": 0, 00:16:30.311 "data_size": 0 00:16:30.311 }, 00:16:30.311 { 00:16:30.311 "name": "BaseBdev4", 00:16:30.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.311 "is_configured": false, 00:16:30.311 "data_offset": 0, 00:16:30.311 "data_size": 0 00:16:30.311 } 00:16:30.311 ] 00:16:30.311 }' 00:16:30.311 18:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.311 18:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.875 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.875 [2024-07-15 18:30:23.270327] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.875 [2024-07-15 18:30:23.270374] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x76c59434500 name Existed_Raid, state configuring 00:16:31.133 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:31.133 [2024-07-15 18:30:23.518370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.133 [2024-07-15 18:30:23.519330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.133 [2024-07-15 18:30:23.519390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.133 [2024-07-15 18:30:23.519395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.133 [2024-07-15 18:30:23.519404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.133 [2024-07-15 18:30:23.519408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.133 [2024-07-15 18:30:23.519414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.391 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.650 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.650 "name": "Existed_Raid", 00:16:31.650 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:31.650 "strip_size_kb": 0, 00:16:31.650 "state": "configuring", 00:16:31.650 "raid_level": "raid1", 00:16:31.650 "superblock": true, 00:16:31.650 "num_base_bdevs": 4, 00:16:31.650 "num_base_bdevs_discovered": 1, 00:16:31.650 "num_base_bdevs_operational": 4, 00:16:31.650 "base_bdevs_list": [ 00:16:31.650 { 00:16:31.650 "name": "BaseBdev1", 00:16:31.650 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:31.650 "is_configured": true, 00:16:31.650 "data_offset": 2048, 00:16:31.650 "data_size": 63488 00:16:31.650 }, 00:16:31.650 { 00:16:31.650 "name": "BaseBdev2", 00:16:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.650 "is_configured": false, 00:16:31.650 "data_offset": 0, 00:16:31.650 "data_size": 0 00:16:31.650 }, 00:16:31.650 { 00:16:31.650 "name": "BaseBdev3", 00:16:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.650 "is_configured": false, 00:16:31.650 "data_offset": 0, 00:16:31.650 "data_size": 0 00:16:31.650 }, 00:16:31.650 { 00:16:31.650 "name": "BaseBdev4", 00:16:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.650 "is_configured": false, 00:16:31.650 "data_offset": 0, 00:16:31.650 "data_size": 0 00:16:31.650 } 00:16:31.650 ] 00:16:31.650 }' 00:16:31.650 18:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.650 18:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.166 [2024-07-15 18:30:24.326566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.166 BaseBdev2 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:32.166 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.424 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.683 [ 00:16:32.683 { 00:16:32.683 "name": "BaseBdev2", 
00:16:32.683 "aliases": [ 00:16:32.683 "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5" 00:16:32.683 ], 00:16:32.683 "product_name": "Malloc disk", 00:16:32.683 "block_size": 512, 00:16:32.683 "num_blocks": 65536, 00:16:32.683 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:32.683 "assigned_rate_limits": { 00:16:32.683 "rw_ios_per_sec": 0, 00:16:32.683 "rw_mbytes_per_sec": 0, 00:16:32.683 "r_mbytes_per_sec": 0, 00:16:32.683 "w_mbytes_per_sec": 0 00:16:32.683 }, 00:16:32.683 "claimed": true, 00:16:32.683 "claim_type": "exclusive_write", 00:16:32.683 "zoned": false, 00:16:32.683 "supported_io_types": { 00:16:32.683 "read": true, 00:16:32.683 "write": true, 00:16:32.683 "unmap": true, 00:16:32.683 "flush": true, 00:16:32.683 "reset": true, 00:16:32.683 "nvme_admin": false, 00:16:32.683 "nvme_io": false, 00:16:32.683 "nvme_io_md": false, 00:16:32.683 "write_zeroes": true, 00:16:32.683 "zcopy": true, 00:16:32.683 "get_zone_info": false, 00:16:32.683 "zone_management": false, 00:16:32.683 "zone_append": false, 00:16:32.683 "compare": false, 00:16:32.683 "compare_and_write": false, 00:16:32.683 "abort": true, 00:16:32.683 "seek_hole": false, 00:16:32.683 "seek_data": false, 00:16:32.683 "copy": true, 00:16:32.683 "nvme_iov_md": false 00:16:32.683 }, 00:16:32.683 "memory_domains": [ 00:16:32.683 { 00:16:32.683 "dma_device_id": "system", 00:16:32.683 "dma_device_type": 1 00:16:32.683 }, 00:16:32.683 { 00:16:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.683 "dma_device_type": 2 00:16:32.683 } 00:16:32.683 ], 00:16:32.683 "driver_specific": {} 00:16:32.683 } 00:16:32.683 ] 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.683 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.684 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.684 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.684 18:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.942 18:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.942 "name": 
"Existed_Raid", 00:16:32.942 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:32.942 "strip_size_kb": 0, 00:16:32.942 "state": "configuring", 00:16:32.942 "raid_level": "raid1", 00:16:32.942 "superblock": true, 00:16:32.942 "num_base_bdevs": 4, 00:16:32.942 "num_base_bdevs_discovered": 2, 00:16:32.942 "num_base_bdevs_operational": 4, 00:16:32.942 "base_bdevs_list": [ 00:16:32.942 { 00:16:32.942 "name": "BaseBdev1", 00:16:32.942 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:32.942 "is_configured": true, 00:16:32.942 "data_offset": 2048, 00:16:32.942 "data_size": 63488 00:16:32.942 }, 00:16:32.942 { 00:16:32.942 "name": "BaseBdev2", 00:16:32.942 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:32.942 "is_configured": true, 00:16:32.942 "data_offset": 2048, 00:16:32.942 "data_size": 63488 00:16:32.942 }, 00:16:32.942 { 00:16:32.942 "name": "BaseBdev3", 00:16:32.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.942 "is_configured": false, 00:16:32.942 "data_offset": 0, 00:16:32.942 "data_size": 0 00:16:32.942 }, 00:16:32.942 { 00:16:32.942 "name": "BaseBdev4", 00:16:32.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.942 "is_configured": false, 00:16:32.942 "data_offset": 0, 00:16:32.942 "data_size": 0 00:16:32.942 } 00:16:32.942 ] 00:16:32.942 }' 00:16:32.942 18:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.942 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.200 18:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.459 [2024-07-15 18:30:25.826675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.459 BaseBdev3 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.459 18:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.719 18:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.978 [ 00:16:33.978 { 00:16:33.978 "name": "BaseBdev3", 00:16:33.978 "aliases": [ 00:16:33.978 "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5" 00:16:33.978 ], 00:16:33.978 "product_name": "Malloc disk", 00:16:33.978 "block_size": 512, 00:16:33.978 "num_blocks": 65536, 00:16:33.978 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:33.978 "assigned_rate_limits": { 00:16:33.978 "rw_ios_per_sec": 0, 00:16:33.978 "rw_mbytes_per_sec": 0, 00:16:33.978 "r_mbytes_per_sec": 0, 00:16:33.978 "w_mbytes_per_sec": 0 00:16:33.978 }, 00:16:33.978 "claimed": true, 00:16:33.978 "claim_type": "exclusive_write", 
00:16:33.978 "zoned": false, 00:16:33.978 "supported_io_types": { 00:16:33.978 "read": true, 00:16:33.978 "write": true, 00:16:33.978 "unmap": true, 00:16:33.978 "flush": true, 00:16:33.978 "reset": true, 00:16:33.978 "nvme_admin": false, 00:16:33.978 "nvme_io": false, 00:16:33.978 "nvme_io_md": false, 00:16:33.978 "write_zeroes": true, 00:16:33.978 "zcopy": true, 00:16:33.978 "get_zone_info": false, 00:16:33.978 "zone_management": false, 00:16:33.978 "zone_append": false, 00:16:33.978 "compare": false, 00:16:33.978 "compare_and_write": false, 00:16:33.978 "abort": true, 00:16:33.978 "seek_hole": false, 00:16:33.978 "seek_data": false, 00:16:33.978 "copy": true, 00:16:33.978 "nvme_iov_md": false 00:16:33.978 }, 00:16:33.978 "memory_domains": [ 00:16:33.978 { 00:16:33.978 "dma_device_id": "system", 00:16:33.978 "dma_device_type": 1 00:16:33.978 }, 00:16:33.978 { 00:16:33.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.978 "dma_device_type": 2 00:16:33.978 } 00:16:33.978 ], 00:16:33.978 "driver_specific": {} 00:16:33.978 } 00:16:33.978 ] 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.978 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.237 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.237 "name": "Existed_Raid", 00:16:34.237 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:34.237 "strip_size_kb": 0, 00:16:34.237 "state": "configuring", 00:16:34.237 "raid_level": "raid1", 00:16:34.237 "superblock": true, 00:16:34.237 "num_base_bdevs": 4, 00:16:34.237 "num_base_bdevs_discovered": 3, 00:16:34.237 "num_base_bdevs_operational": 4, 00:16:34.237 "base_bdevs_list": [ 00:16:34.237 { 00:16:34.237 "name": "BaseBdev1", 00:16:34.237 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:34.237 "is_configured": true, 00:16:34.237 
"data_offset": 2048, 00:16:34.237 "data_size": 63488 00:16:34.237 }, 00:16:34.237 { 00:16:34.237 "name": "BaseBdev2", 00:16:34.237 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:34.237 "is_configured": true, 00:16:34.237 "data_offset": 2048, 00:16:34.237 "data_size": 63488 00:16:34.237 }, 00:16:34.237 { 00:16:34.237 "name": "BaseBdev3", 00:16:34.237 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:34.237 "is_configured": true, 00:16:34.237 "data_offset": 2048, 00:16:34.237 "data_size": 63488 00:16:34.237 }, 00:16:34.237 { 00:16:34.237 "name": "BaseBdev4", 00:16:34.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.237 "is_configured": false, 00:16:34.237 "data_offset": 0, 00:16:34.237 "data_size": 0 00:16:34.237 } 00:16:34.237 ] 00:16:34.237 }' 00:16:34.237 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.237 18:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.805 18:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:35.063 [2024-07-15 18:30:27.214776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.063 [2024-07-15 18:30:27.214858] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x76c59434a00 00:16:35.063 [2024-07-15 18:30:27.214865] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.063 [2024-07-15 18:30:27.214888] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x76c59497e20 00:16:35.063 [2024-07-15 18:30:27.214953] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x76c59434a00 00:16:35.063 [2024-07-15 18:30:27.214958] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x76c59434a00 00:16:35.063 [2024-07-15 18:30:27.214981] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.063 BaseBdev4 00:16:35.063 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:35.063 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:35.063 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.064 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:35.064 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.064 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.064 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.323 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:35.581 [ 00:16:35.581 { 00:16:35.581 "name": "BaseBdev4", 00:16:35.581 "aliases": [ 00:16:35.581 "4eada419-42d8-11ef-9ade-d5fc5159efa5" 00:16:35.581 ], 00:16:35.581 "product_name": "Malloc disk", 00:16:35.581 "block_size": 512, 00:16:35.581 "num_blocks": 65536, 00:16:35.581 "uuid": "4eada419-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.581 
"assigned_rate_limits": { 00:16:35.581 "rw_ios_per_sec": 0, 00:16:35.581 "rw_mbytes_per_sec": 0, 00:16:35.581 "r_mbytes_per_sec": 0, 00:16:35.581 "w_mbytes_per_sec": 0 00:16:35.581 }, 00:16:35.581 "claimed": true, 00:16:35.581 "claim_type": "exclusive_write", 00:16:35.581 "zoned": false, 00:16:35.581 "supported_io_types": { 00:16:35.581 "read": true, 00:16:35.581 "write": true, 00:16:35.581 "unmap": true, 00:16:35.581 "flush": true, 00:16:35.581 "reset": true, 00:16:35.581 "nvme_admin": false, 00:16:35.581 "nvme_io": false, 00:16:35.581 "nvme_io_md": false, 00:16:35.581 "write_zeroes": true, 00:16:35.581 "zcopy": true, 00:16:35.581 "get_zone_info": false, 00:16:35.581 "zone_management": false, 00:16:35.581 "zone_append": false, 00:16:35.581 "compare": false, 00:16:35.581 "compare_and_write": false, 00:16:35.581 "abort": true, 00:16:35.581 "seek_hole": false, 00:16:35.581 "seek_data": false, 00:16:35.581 "copy": true, 00:16:35.581 "nvme_iov_md": false 00:16:35.581 }, 00:16:35.581 "memory_domains": [ 00:16:35.581 { 00:16:35.581 "dma_device_id": "system", 00:16:35.581 "dma_device_type": 1 00:16:35.581 }, 00:16:35.581 { 00:16:35.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.581 "dma_device_type": 2 00:16:35.581 } 00:16:35.581 ], 00:16:35.581 "driver_specific": {} 00:16:35.581 } 00:16:35.581 ] 00:16:35.581 18:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:35.581 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:35.581 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.582 18:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.841 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.841 "name": "Existed_Raid", 00:16:35.841 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.841 "strip_size_kb": 0, 00:16:35.841 "state": "online", 00:16:35.841 "raid_level": "raid1", 00:16:35.841 "superblock": true, 00:16:35.841 "num_base_bdevs": 4, 00:16:35.841 "num_base_bdevs_discovered": 
4, 00:16:35.841 "num_base_bdevs_operational": 4, 00:16:35.841 "base_bdevs_list": [ 00:16:35.841 { 00:16:35.841 "name": "BaseBdev1", 00:16:35.841 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 2048, 00:16:35.841 "data_size": 63488 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev2", 00:16:35.841 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 2048, 00:16:35.841 "data_size": 63488 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev3", 00:16:35.841 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 2048, 00:16:35.841 "data_size": 63488 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev4", 00:16:35.841 "uuid": "4eada419-42d8-11ef-9ade-d5fc5159efa5", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 2048, 00:16:35.841 "data_size": 63488 00:16:35.841 } 00:16:35.841 ] 00:16:35.841 }' 00:16:35.841 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.841 18:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:36.100 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:36.358 [2024-07-15 18:30:28.642796] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:36.359 "name": "Existed_Raid", 00:16:36.359 "aliases": [ 00:16:36.359 "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5" 00:16:36.359 ], 00:16:36.359 "product_name": "Raid Volume", 00:16:36.359 "block_size": 512, 00:16:36.359 "num_blocks": 63488, 00:16:36.359 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "assigned_rate_limits": { 00:16:36.359 "rw_ios_per_sec": 0, 00:16:36.359 "rw_mbytes_per_sec": 0, 00:16:36.359 "r_mbytes_per_sec": 0, 00:16:36.359 "w_mbytes_per_sec": 0 00:16:36.359 }, 00:16:36.359 "claimed": false, 00:16:36.359 "zoned": false, 00:16:36.359 "supported_io_types": { 00:16:36.359 "read": true, 00:16:36.359 "write": true, 00:16:36.359 "unmap": false, 00:16:36.359 "flush": false, 00:16:36.359 "reset": true, 00:16:36.359 "nvme_admin": false, 00:16:36.359 "nvme_io": false, 00:16:36.359 "nvme_io_md": false, 00:16:36.359 "write_zeroes": true, 00:16:36.359 "zcopy": false, 00:16:36.359 "get_zone_info": false, 00:16:36.359 "zone_management": false, 00:16:36.359 "zone_append": false, 00:16:36.359 "compare": false, 00:16:36.359 "compare_and_write": false, 00:16:36.359 "abort": 
false, 00:16:36.359 "seek_hole": false, 00:16:36.359 "seek_data": false, 00:16:36.359 "copy": false, 00:16:36.359 "nvme_iov_md": false 00:16:36.359 }, 00:16:36.359 "memory_domains": [ 00:16:36.359 { 00:16:36.359 "dma_device_id": "system", 00:16:36.359 "dma_device_type": 1 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.359 "dma_device_type": 2 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "system", 00:16:36.359 "dma_device_type": 1 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.359 "dma_device_type": 2 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "system", 00:16:36.359 "dma_device_type": 1 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.359 "dma_device_type": 2 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "system", 00:16:36.359 "dma_device_type": 1 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.359 "dma_device_type": 2 00:16:36.359 } 00:16:36.359 ], 00:16:36.359 "driver_specific": { 00:16:36.359 "raid": { 00:16:36.359 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "strip_size_kb": 0, 00:16:36.359 "state": "online", 00:16:36.359 "raid_level": "raid1", 00:16:36.359 "superblock": true, 00:16:36.359 "num_base_bdevs": 4, 00:16:36.359 "num_base_bdevs_discovered": 4, 00:16:36.359 "num_base_bdevs_operational": 4, 00:16:36.359 "base_bdevs_list": [ 00:16:36.359 { 00:16:36.359 "name": "BaseBdev1", 00:16:36.359 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "is_configured": true, 00:16:36.359 "data_offset": 2048, 00:16:36.359 "data_size": 63488 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "name": "BaseBdev2", 00:16:36.359 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "is_configured": true, 00:16:36.359 "data_offset": 2048, 00:16:36.359 "data_size": 63488 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "name": "BaseBdev3", 00:16:36.359 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "is_configured": true, 00:16:36.359 "data_offset": 2048, 00:16:36.359 "data_size": 63488 00:16:36.359 }, 00:16:36.359 { 00:16:36.359 "name": "BaseBdev4", 00:16:36.359 "uuid": "4eada419-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.359 "is_configured": true, 00:16:36.359 "data_offset": 2048, 00:16:36.359 "data_size": 63488 00:16:36.359 } 00:16:36.359 ] 00:16:36.359 } 00:16:36.359 } 00:16:36.359 }' 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:36.359 BaseBdev2 00:16:36.359 BaseBdev3 00:16:36.359 BaseBdev4' 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:36.359 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.618 "name": "BaseBdev1", 00:16:36.618 "aliases": [ 00:16:36.618 "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5" 00:16:36.618 ], 00:16:36.618 "product_name": "Malloc disk", 00:16:36.618 
"block_size": 512, 00:16:36.618 "num_blocks": 65536, 00:16:36.618 "uuid": "4b7ebf75-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.618 "assigned_rate_limits": { 00:16:36.618 "rw_ios_per_sec": 0, 00:16:36.618 "rw_mbytes_per_sec": 0, 00:16:36.618 "r_mbytes_per_sec": 0, 00:16:36.618 "w_mbytes_per_sec": 0 00:16:36.618 }, 00:16:36.618 "claimed": true, 00:16:36.618 "claim_type": "exclusive_write", 00:16:36.618 "zoned": false, 00:16:36.618 "supported_io_types": { 00:16:36.618 "read": true, 00:16:36.618 "write": true, 00:16:36.618 "unmap": true, 00:16:36.618 "flush": true, 00:16:36.618 "reset": true, 00:16:36.618 "nvme_admin": false, 00:16:36.618 "nvme_io": false, 00:16:36.618 "nvme_io_md": false, 00:16:36.618 "write_zeroes": true, 00:16:36.618 "zcopy": true, 00:16:36.618 "get_zone_info": false, 00:16:36.618 "zone_management": false, 00:16:36.618 "zone_append": false, 00:16:36.618 "compare": false, 00:16:36.618 "compare_and_write": false, 00:16:36.618 "abort": true, 00:16:36.618 "seek_hole": false, 00:16:36.618 "seek_data": false, 00:16:36.618 "copy": true, 00:16:36.618 "nvme_iov_md": false 00:16:36.618 }, 00:16:36.618 "memory_domains": [ 00:16:36.618 { 00:16:36.618 "dma_device_id": "system", 00:16:36.618 "dma_device_type": 1 00:16:36.618 }, 00:16:36.618 { 00:16:36.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.618 "dma_device_type": 2 00:16:36.618 } 00:16:36.618 ], 00:16:36.618 "driver_specific": {} 00:16:36.618 }' 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.618 18:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.618 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.618 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.618 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:36.877 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.877 "name": "BaseBdev2", 00:16:36.877 "aliases": [ 00:16:36.877 "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5" 00:16:36.877 ], 00:16:36.877 "product_name": "Malloc disk", 00:16:36.877 "block_size": 512, 00:16:36.877 "num_blocks": 65536, 00:16:36.877 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:36.877 "assigned_rate_limits": { 
00:16:36.878 "rw_ios_per_sec": 0, 00:16:36.878 "rw_mbytes_per_sec": 0, 00:16:36.878 "r_mbytes_per_sec": 0, 00:16:36.878 "w_mbytes_per_sec": 0 00:16:36.878 }, 00:16:36.878 "claimed": true, 00:16:36.878 "claim_type": "exclusive_write", 00:16:36.878 "zoned": false, 00:16:36.878 "supported_io_types": { 00:16:36.878 "read": true, 00:16:36.878 "write": true, 00:16:36.878 "unmap": true, 00:16:36.878 "flush": true, 00:16:36.878 "reset": true, 00:16:36.878 "nvme_admin": false, 00:16:36.878 "nvme_io": false, 00:16:36.878 "nvme_io_md": false, 00:16:36.878 "write_zeroes": true, 00:16:36.878 "zcopy": true, 00:16:36.878 "get_zone_info": false, 00:16:36.878 "zone_management": false, 00:16:36.878 "zone_append": false, 00:16:36.878 "compare": false, 00:16:36.878 "compare_and_write": false, 00:16:36.878 "abort": true, 00:16:36.878 "seek_hole": false, 00:16:36.878 "seek_data": false, 00:16:36.878 "copy": true, 00:16:36.878 "nvme_iov_md": false 00:16:36.878 }, 00:16:36.878 "memory_domains": [ 00:16:36.878 { 00:16:36.878 "dma_device_id": "system", 00:16:36.878 "dma_device_type": 1 00:16:36.878 }, 00:16:36.878 { 00:16:36.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.878 "dma_device_type": 2 00:16:36.878 } 00:16:36.878 ], 00:16:36.878 "driver_specific": {} 00:16:36.878 }' 00:16:36.878 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:37.137 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.397 "name": "BaseBdev3", 00:16:37.397 "aliases": [ 00:16:37.397 "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5" 00:16:37.397 ], 00:16:37.397 "product_name": "Malloc disk", 00:16:37.397 "block_size": 512, 00:16:37.397 "num_blocks": 65536, 00:16:37.397 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:37.397 "assigned_rate_limits": { 00:16:37.397 "rw_ios_per_sec": 0, 00:16:37.397 "rw_mbytes_per_sec": 0, 00:16:37.397 "r_mbytes_per_sec": 0, 00:16:37.397 "w_mbytes_per_sec": 0 
00:16:37.397 }, 00:16:37.397 "claimed": true, 00:16:37.397 "claim_type": "exclusive_write", 00:16:37.397 "zoned": false, 00:16:37.397 "supported_io_types": { 00:16:37.397 "read": true, 00:16:37.397 "write": true, 00:16:37.397 "unmap": true, 00:16:37.397 "flush": true, 00:16:37.397 "reset": true, 00:16:37.397 "nvme_admin": false, 00:16:37.397 "nvme_io": false, 00:16:37.397 "nvme_io_md": false, 00:16:37.397 "write_zeroes": true, 00:16:37.397 "zcopy": true, 00:16:37.397 "get_zone_info": false, 00:16:37.397 "zone_management": false, 00:16:37.397 "zone_append": false, 00:16:37.397 "compare": false, 00:16:37.397 "compare_and_write": false, 00:16:37.397 "abort": true, 00:16:37.397 "seek_hole": false, 00:16:37.397 "seek_data": false, 00:16:37.397 "copy": true, 00:16:37.397 "nvme_iov_md": false 00:16:37.397 }, 00:16:37.397 "memory_domains": [ 00:16:37.397 { 00:16:37.397 "dma_device_id": "system", 00:16:37.397 "dma_device_type": 1 00:16:37.397 }, 00:16:37.397 { 00:16:37.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.397 "dma_device_type": 2 00:16:37.397 } 00:16:37.397 ], 00:16:37.397 "driver_specific": {} 00:16:37.397 }' 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.397 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:37.398 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.657 "name": "BaseBdev4", 00:16:37.657 "aliases": [ 00:16:37.657 "4eada419-42d8-11ef-9ade-d5fc5159efa5" 00:16:37.657 ], 00:16:37.657 "product_name": "Malloc disk", 00:16:37.657 "block_size": 512, 00:16:37.657 "num_blocks": 65536, 00:16:37.657 "uuid": "4eada419-42d8-11ef-9ade-d5fc5159efa5", 00:16:37.657 "assigned_rate_limits": { 00:16:37.657 "rw_ios_per_sec": 0, 00:16:37.657 "rw_mbytes_per_sec": 0, 00:16:37.657 "r_mbytes_per_sec": 0, 00:16:37.657 "w_mbytes_per_sec": 0 00:16:37.657 }, 00:16:37.657 "claimed": true, 00:16:37.657 "claim_type": "exclusive_write", 00:16:37.657 "zoned": false, 00:16:37.657 
"supported_io_types": { 00:16:37.657 "read": true, 00:16:37.657 "write": true, 00:16:37.657 "unmap": true, 00:16:37.657 "flush": true, 00:16:37.657 "reset": true, 00:16:37.657 "nvme_admin": false, 00:16:37.657 "nvme_io": false, 00:16:37.657 "nvme_io_md": false, 00:16:37.657 "write_zeroes": true, 00:16:37.657 "zcopy": true, 00:16:37.657 "get_zone_info": false, 00:16:37.657 "zone_management": false, 00:16:37.657 "zone_append": false, 00:16:37.657 "compare": false, 00:16:37.657 "compare_and_write": false, 00:16:37.657 "abort": true, 00:16:37.657 "seek_hole": false, 00:16:37.657 "seek_data": false, 00:16:37.657 "copy": true, 00:16:37.657 "nvme_iov_md": false 00:16:37.657 }, 00:16:37.657 "memory_domains": [ 00:16:37.657 { 00:16:37.657 "dma_device_id": "system", 00:16:37.657 "dma_device_type": 1 00:16:37.657 }, 00:16:37.657 { 00:16:37.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.657 "dma_device_type": 2 00:16:37.657 } 00:16:37.657 ], 00:16:37.657 "driver_specific": {} 00:16:37.657 }' 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.657 18:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:37.916 [2024-07-15 18:30:30.254888] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.916 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.175 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.175 "name": "Existed_Raid", 00:16:38.175 "uuid": "4c79a1d0-42d8-11ef-9ade-d5fc5159efa5", 00:16:38.175 "strip_size_kb": 0, 00:16:38.175 "state": "online", 00:16:38.175 "raid_level": "raid1", 00:16:38.175 "superblock": true, 00:16:38.175 "num_base_bdevs": 4, 00:16:38.175 "num_base_bdevs_discovered": 3, 00:16:38.175 "num_base_bdevs_operational": 3, 00:16:38.175 "base_bdevs_list": [ 00:16:38.175 { 00:16:38.175 "name": null, 00:16:38.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.175 "is_configured": false, 00:16:38.175 "data_offset": 2048, 00:16:38.175 "data_size": 63488 00:16:38.175 }, 00:16:38.175 { 00:16:38.175 "name": "BaseBdev2", 00:16:38.175 "uuid": "4cf4eedb-42d8-11ef-9ade-d5fc5159efa5", 00:16:38.175 "is_configured": true, 00:16:38.175 "data_offset": 2048, 00:16:38.175 "data_size": 63488 00:16:38.175 }, 00:16:38.175 { 00:16:38.175 "name": "BaseBdev3", 00:16:38.175 "uuid": "4dd9d55d-42d8-11ef-9ade-d5fc5159efa5", 00:16:38.175 "is_configured": true, 00:16:38.175 "data_offset": 2048, 00:16:38.175 "data_size": 63488 00:16:38.175 }, 00:16:38.175 { 00:16:38.175 "name": "BaseBdev4", 00:16:38.175 "uuid": "4eada419-42d8-11ef-9ade-d5fc5159efa5", 00:16:38.175 "is_configured": true, 00:16:38.175 "data_offset": 2048, 00:16:38.175 "data_size": 63488 00:16:38.175 } 00:16:38.175 ] 00:16:38.175 }' 00:16:38.175 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.175 18:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.742 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:38.742 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:38.742 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.742 18:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:39.000 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:39.000 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.000 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:39.258 [2024-07-15 18:30:31.397005] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.258 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:39.258 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:39.258 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.258 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:39.516 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:39.516 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.516 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:39.516 [2024-07-15 18:30:31.905394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.774 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:39.774 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:39.774 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.774 18:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:39.774 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:39.774 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.774 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:40.033 [2024-07-15 18:30:32.381849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:40.033 [2024-07-15 18:30:32.381907] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.033 [2024-07-15 18:30:32.387883] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.033 [2024-07-15 18:30:32.387905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.033 [2024-07-15 18:30:32.387910] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x76c59434a00 name Existed_Raid, state offline 00:16:40.033 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:40.033 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.033 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.033 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:40.292 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.866 BaseBdev2 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.866 18:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.866 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.124 [ 00:16:41.124 { 00:16:41.124 "name": "BaseBdev2", 00:16:41.124 "aliases": [ 00:16:41.124 "5217dad7-42d8-11ef-9ade-d5fc5159efa5" 00:16:41.124 ], 00:16:41.124 "product_name": "Malloc disk", 00:16:41.124 "block_size": 512, 00:16:41.124 "num_blocks": 65536, 00:16:41.124 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:41.124 "assigned_rate_limits": { 00:16:41.124 "rw_ios_per_sec": 0, 00:16:41.124 "rw_mbytes_per_sec": 0, 00:16:41.124 "r_mbytes_per_sec": 0, 00:16:41.124 "w_mbytes_per_sec": 0 00:16:41.124 }, 00:16:41.124 "claimed": false, 00:16:41.124 "zoned": false, 00:16:41.124 "supported_io_types": { 00:16:41.124 "read": true, 00:16:41.124 "write": true, 00:16:41.124 "unmap": true, 00:16:41.124 "flush": true, 00:16:41.124 "reset": true, 00:16:41.124 "nvme_admin": false, 00:16:41.124 "nvme_io": false, 00:16:41.124 "nvme_io_md": false, 00:16:41.124 "write_zeroes": true, 00:16:41.124 "zcopy": true, 00:16:41.124 "get_zone_info": false, 00:16:41.124 "zone_management": false, 00:16:41.124 "zone_append": false, 00:16:41.124 "compare": false, 00:16:41.124 "compare_and_write": false, 00:16:41.124 "abort": true, 00:16:41.124 "seek_hole": false, 00:16:41.124 "seek_data": false, 00:16:41.124 "copy": true, 00:16:41.124 "nvme_iov_md": false 00:16:41.124 }, 00:16:41.124 "memory_domains": [ 00:16:41.124 { 00:16:41.124 "dma_device_id": "system", 00:16:41.124 "dma_device_type": 1 00:16:41.124 }, 00:16:41.124 { 00:16:41.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.124 "dma_device_type": 2 00:16:41.124 } 00:16:41.124 ], 00:16:41.124 "driver_specific": {} 00:16:41.124 } 00:16:41.124 ] 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.383 BaseBdev3 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.383 18:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.642 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.901 [ 00:16:41.901 { 00:16:41.901 "name": "BaseBdev3", 00:16:41.901 "aliases": [ 00:16:41.901 "52928b16-42d8-11ef-9ade-d5fc5159efa5" 00:16:41.901 ], 00:16:41.901 "product_name": "Malloc disk", 00:16:41.901 "block_size": 512, 00:16:41.901 "num_blocks": 65536, 00:16:41.901 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:41.901 "assigned_rate_limits": { 00:16:41.901 "rw_ios_per_sec": 0, 00:16:41.901 "rw_mbytes_per_sec": 0, 00:16:41.901 "r_mbytes_per_sec": 0, 00:16:41.901 "w_mbytes_per_sec": 0 00:16:41.901 }, 00:16:41.901 "claimed": false, 00:16:41.901 "zoned": false, 00:16:41.901 "supported_io_types": { 00:16:41.901 "read": true, 00:16:41.901 "write": true, 00:16:41.901 "unmap": true, 00:16:41.901 "flush": true, 00:16:41.901 "reset": true, 00:16:41.901 "nvme_admin": false, 00:16:41.901 "nvme_io": false, 00:16:41.901 "nvme_io_md": false, 00:16:41.901 "write_zeroes": true, 00:16:41.901 "zcopy": true, 00:16:41.901 "get_zone_info": false, 00:16:41.901 "zone_management": false, 00:16:41.901 "zone_append": false, 00:16:41.901 "compare": false, 00:16:41.901 "compare_and_write": false, 00:16:41.901 "abort": true, 00:16:41.901 "seek_hole": false, 00:16:41.901 "seek_data": false, 00:16:41.901 "copy": true, 00:16:41.901 "nvme_iov_md": false 00:16:41.901 }, 00:16:41.901 "memory_domains": [ 00:16:41.901 { 00:16:41.901 "dma_device_id": "system", 00:16:41.901 "dma_device_type": 1 00:16:41.901 }, 00:16:41.901 { 00:16:41.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.901 "dma_device_type": 2 00:16:41.901 } 00:16:41.901 ], 00:16:41.901 "driver_specific": {} 00:16:41.901 } 00:16:41.901 ] 00:16:41.901 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:41.901 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:41.901 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:41.901 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:42.160 BaseBdev4 00:16:42.160 18:30:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:42.160 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.418 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:42.677 [ 00:16:42.677 { 00:16:42.677 "name": "BaseBdev4", 00:16:42.677 "aliases": [ 00:16:42.677 "53037755-42d8-11ef-9ade-d5fc5159efa5" 00:16:42.677 ], 00:16:42.677 "product_name": "Malloc disk", 00:16:42.677 "block_size": 512, 00:16:42.678 "num_blocks": 65536, 00:16:42.678 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:42.678 "assigned_rate_limits": { 00:16:42.678 "rw_ios_per_sec": 0, 00:16:42.678 "rw_mbytes_per_sec": 0, 00:16:42.678 "r_mbytes_per_sec": 0, 00:16:42.678 "w_mbytes_per_sec": 0 00:16:42.678 }, 00:16:42.678 "claimed": false, 00:16:42.678 "zoned": false, 00:16:42.678 "supported_io_types": { 00:16:42.678 "read": true, 00:16:42.678 "write": true, 00:16:42.678 "unmap": true, 00:16:42.678 "flush": true, 00:16:42.678 "reset": true, 00:16:42.678 "nvme_admin": false, 00:16:42.678 "nvme_io": false, 00:16:42.678 "nvme_io_md": false, 00:16:42.678 "write_zeroes": true, 00:16:42.678 "zcopy": true, 00:16:42.678 "get_zone_info": false, 00:16:42.678 "zone_management": false, 00:16:42.678 "zone_append": false, 00:16:42.678 "compare": false, 00:16:42.678 "compare_and_write": false, 00:16:42.678 "abort": true, 00:16:42.678 "seek_hole": false, 00:16:42.678 "seek_data": false, 00:16:42.678 "copy": true, 00:16:42.678 "nvme_iov_md": false 00:16:42.678 }, 00:16:42.678 "memory_domains": [ 00:16:42.678 { 00:16:42.678 "dma_device_id": "system", 00:16:42.678 "dma_device_type": 1 00:16:42.678 }, 00:16:42.678 { 00:16:42.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.678 "dma_device_type": 2 00:16:42.678 } 00:16:42.678 ], 00:16:42.678 "driver_specific": {} 00:16:42.678 } 00:16:42.678 ] 00:16:42.678 18:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:42.678 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.678 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.678 18:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:42.937 [2024-07-15 18:30:35.212055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.937 [2024-07-15 18:30:35.212114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.937 [2024-07-15 18:30:35.212124] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.937 [2024-07-15 18:30:35.212758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.937 [2024-07-15 18:30:35.212782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.937 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.195 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.195 "name": "Existed_Raid", 00:16:43.195 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:43.195 "strip_size_kb": 0, 00:16:43.195 "state": "configuring", 00:16:43.195 "raid_level": "raid1", 00:16:43.195 "superblock": true, 00:16:43.195 "num_base_bdevs": 4, 00:16:43.195 "num_base_bdevs_discovered": 3, 00:16:43.195 "num_base_bdevs_operational": 4, 00:16:43.195 "base_bdevs_list": [ 00:16:43.195 { 00:16:43.195 "name": "BaseBdev1", 00:16:43.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.195 "is_configured": false, 00:16:43.195 "data_offset": 0, 00:16:43.195 "data_size": 0 00:16:43.195 }, 00:16:43.195 { 00:16:43.195 "name": "BaseBdev2", 00:16:43.195 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:43.195 "is_configured": true, 00:16:43.195 "data_offset": 2048, 00:16:43.195 "data_size": 63488 00:16:43.195 }, 00:16:43.195 { 00:16:43.195 "name": "BaseBdev3", 00:16:43.195 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:43.195 "is_configured": true, 00:16:43.195 "data_offset": 2048, 00:16:43.195 "data_size": 63488 00:16:43.195 }, 00:16:43.195 { 00:16:43.195 "name": "BaseBdev4", 00:16:43.195 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:43.195 "is_configured": true, 00:16:43.195 "data_offset": 2048, 00:16:43.195 "data_size": 63488 00:16:43.195 } 00:16:43.195 ] 00:16:43.195 }' 00:16:43.195 18:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.195 18:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.763 18:30:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:43.763 [2024-07-15 18:30:36.156116] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.020 "name": "Existed_Raid", 00:16:44.020 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:44.020 "strip_size_kb": 0, 00:16:44.020 "state": "configuring", 00:16:44.020 "raid_level": "raid1", 00:16:44.020 "superblock": true, 00:16:44.020 "num_base_bdevs": 4, 00:16:44.020 "num_base_bdevs_discovered": 2, 00:16:44.020 "num_base_bdevs_operational": 4, 00:16:44.020 "base_bdevs_list": [ 00:16:44.020 { 00:16:44.020 "name": "BaseBdev1", 00:16:44.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.020 "is_configured": false, 00:16:44.020 "data_offset": 0, 00:16:44.020 "data_size": 0 00:16:44.020 }, 00:16:44.020 { 00:16:44.020 "name": null, 00:16:44.020 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:44.020 "is_configured": false, 00:16:44.020 "data_offset": 2048, 00:16:44.020 "data_size": 63488 00:16:44.020 }, 00:16:44.020 { 00:16:44.020 "name": "BaseBdev3", 00:16:44.020 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:44.020 "is_configured": true, 00:16:44.020 "data_offset": 2048, 00:16:44.020 "data_size": 63488 00:16:44.020 }, 00:16:44.020 { 00:16:44.020 "name": "BaseBdev4", 00:16:44.020 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:44.020 "is_configured": true, 00:16:44.020 "data_offset": 2048, 00:16:44.020 "data_size": 63488 00:16:44.020 } 00:16:44.020 ] 00:16:44.020 }' 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.020 18:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.587 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:44.588 18:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.846 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:44.846 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.105 [2024-07-15 18:30:37.296353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.105 BaseBdev1 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:45.105 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.363 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.621 [ 00:16:45.621 { 00:16:45.621 "name": "BaseBdev1", 00:16:45.621 "aliases": [ 00:16:45.621 "54aff7b6-42d8-11ef-9ade-d5fc5159efa5" 00:16:45.621 ], 00:16:45.621 "product_name": "Malloc disk", 00:16:45.621 "block_size": 512, 00:16:45.621 "num_blocks": 65536, 00:16:45.621 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.621 "assigned_rate_limits": { 00:16:45.621 "rw_ios_per_sec": 0, 00:16:45.621 "rw_mbytes_per_sec": 0, 00:16:45.621 "r_mbytes_per_sec": 0, 00:16:45.621 "w_mbytes_per_sec": 0 00:16:45.621 }, 00:16:45.621 "claimed": true, 00:16:45.622 "claim_type": "exclusive_write", 00:16:45.622 "zoned": false, 00:16:45.622 "supported_io_types": { 00:16:45.622 "read": true, 00:16:45.622 "write": true, 00:16:45.622 "unmap": true, 00:16:45.622 "flush": true, 00:16:45.622 "reset": true, 00:16:45.622 "nvme_admin": false, 00:16:45.622 "nvme_io": false, 00:16:45.622 "nvme_io_md": false, 00:16:45.622 "write_zeroes": true, 00:16:45.622 "zcopy": true, 00:16:45.622 "get_zone_info": false, 00:16:45.622 "zone_management": false, 00:16:45.622 "zone_append": false, 00:16:45.622 "compare": false, 00:16:45.622 "compare_and_write": false, 00:16:45.622 "abort": true, 00:16:45.622 "seek_hole": false, 00:16:45.622 "seek_data": false, 00:16:45.622 "copy": true, 00:16:45.622 "nvme_iov_md": false 00:16:45.622 }, 00:16:45.622 "memory_domains": [ 00:16:45.622 { 00:16:45.622 "dma_device_id": "system", 00:16:45.622 "dma_device_type": 1 00:16:45.622 }, 00:16:45.622 { 00:16:45.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.622 "dma_device_type": 2 00:16:45.622 } 00:16:45.622 ], 00:16:45.622 "driver_specific": {} 00:16:45.622 } 00:16:45.622 ] 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.622 18:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.879 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.879 "name": "Existed_Raid", 00:16:45.879 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.879 "strip_size_kb": 0, 00:16:45.879 "state": "configuring", 00:16:45.879 "raid_level": "raid1", 00:16:45.879 "superblock": true, 00:16:45.879 "num_base_bdevs": 4, 00:16:45.879 "num_base_bdevs_discovered": 3, 00:16:45.879 "num_base_bdevs_operational": 4, 00:16:45.879 "base_bdevs_list": [ 00:16:45.879 { 00:16:45.879 "name": "BaseBdev1", 00:16:45.879 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.879 "is_configured": true, 00:16:45.879 "data_offset": 2048, 00:16:45.879 "data_size": 63488 00:16:45.879 }, 00:16:45.879 { 00:16:45.879 "name": null, 00:16:45.879 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.879 "is_configured": false, 00:16:45.879 "data_offset": 2048, 00:16:45.879 "data_size": 63488 00:16:45.879 }, 00:16:45.879 { 00:16:45.879 "name": "BaseBdev3", 00:16:45.879 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.879 "is_configured": true, 00:16:45.879 "data_offset": 2048, 00:16:45.879 "data_size": 63488 00:16:45.879 }, 00:16:45.879 { 00:16:45.879 "name": "BaseBdev4", 00:16:45.879 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:45.879 "is_configured": true, 00:16:45.879 "data_offset": 2048, 00:16:45.879 "data_size": 63488 00:16:45.879 } 00:16:45.879 ] 00:16:45.879 }' 00:16:45.879 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.879 18:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.137 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.137 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:46.393 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:46.393 18:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:46.650 [2024-07-15 18:30:38.916320] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.650 18:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.907 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.907 "name": "Existed_Raid", 00:16:46.907 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:46.907 "strip_size_kb": 0, 00:16:46.907 "state": "configuring", 00:16:46.907 "raid_level": "raid1", 00:16:46.907 "superblock": true, 00:16:46.907 "num_base_bdevs": 4, 00:16:46.907 "num_base_bdevs_discovered": 2, 00:16:46.907 "num_base_bdevs_operational": 4, 00:16:46.907 "base_bdevs_list": [ 00:16:46.907 { 00:16:46.907 "name": "BaseBdev1", 00:16:46.907 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:46.907 "is_configured": true, 00:16:46.907 "data_offset": 2048, 00:16:46.907 "data_size": 63488 00:16:46.907 }, 00:16:46.907 { 00:16:46.907 "name": null, 00:16:46.907 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:46.907 "is_configured": false, 00:16:46.907 "data_offset": 2048, 00:16:46.907 "data_size": 63488 00:16:46.907 }, 00:16:46.907 { 00:16:46.907 "name": null, 00:16:46.907 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:46.907 "is_configured": false, 00:16:46.907 "data_offset": 2048, 00:16:46.907 "data_size": 63488 00:16:46.907 }, 00:16:46.907 { 00:16:46.907 "name": "BaseBdev4", 00:16:46.907 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:46.908 "is_configured": true, 00:16:46.908 "data_offset": 2048, 00:16:46.908 "data_size": 63488 00:16:46.908 } 00:16:46.908 ] 00:16:46.908 }' 00:16:46.908 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.908 18:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.165 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.165 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:47.731 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:47.731 18:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:47.731 [2024-07-15 18:30:40.048411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.731 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.990 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.990 "name": "Existed_Raid", 00:16:47.990 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:47.990 "strip_size_kb": 0, 00:16:47.990 "state": "configuring", 00:16:47.990 "raid_level": "raid1", 00:16:47.990 "superblock": true, 00:16:47.990 "num_base_bdevs": 4, 00:16:47.990 "num_base_bdevs_discovered": 3, 00:16:47.990 "num_base_bdevs_operational": 4, 00:16:47.990 "base_bdevs_list": [ 00:16:47.990 { 00:16:47.990 "name": "BaseBdev1", 00:16:47.990 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": null, 00:16:47.990 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:47.990 "is_configured": false, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": "BaseBdev3", 00:16:47.990 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": "BaseBdev4", 00:16:47.990 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 
00:16:47.990 "data_size": 63488 00:16:47.990 } 00:16:47.990 ] 00:16:47.990 }' 00:16:47.990 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.990 18:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.249 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.249 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.885 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:48.885 18:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:48.885 [2024-07-15 18:30:41.152498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.885 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.143 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.143 "name": "Existed_Raid", 00:16:49.143 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:49.143 "strip_size_kb": 0, 00:16:49.143 "state": "configuring", 00:16:49.143 "raid_level": "raid1", 00:16:49.143 "superblock": true, 00:16:49.143 "num_base_bdevs": 4, 00:16:49.143 "num_base_bdevs_discovered": 2, 00:16:49.143 "num_base_bdevs_operational": 4, 00:16:49.143 "base_bdevs_list": [ 00:16:49.143 { 00:16:49.143 "name": null, 00:16:49.143 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:49.143 "is_configured": false, 00:16:49.143 "data_offset": 2048, 00:16:49.143 "data_size": 63488 00:16:49.143 }, 00:16:49.143 { 00:16:49.143 "name": null, 00:16:49.143 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:49.143 "is_configured": false, 00:16:49.143 "data_offset": 2048, 00:16:49.143 "data_size": 63488 00:16:49.143 }, 00:16:49.143 { 00:16:49.143 "name": "BaseBdev3", 00:16:49.143 "uuid": 
"52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:49.143 "is_configured": true, 00:16:49.143 "data_offset": 2048, 00:16:49.143 "data_size": 63488 00:16:49.143 }, 00:16:49.143 { 00:16:49.143 "name": "BaseBdev4", 00:16:49.143 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:49.143 "is_configured": true, 00:16:49.143 "data_offset": 2048, 00:16:49.143 "data_size": 63488 00:16:49.143 } 00:16:49.143 ] 00:16:49.143 }' 00:16:49.143 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.143 18:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.401 18:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.967 [2024-07-15 18:30:42.342530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.967 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.225 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.225 "name": "Existed_Raid", 00:16:50.225 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:50.225 "strip_size_kb": 0, 00:16:50.225 "state": "configuring", 00:16:50.225 "raid_level": "raid1", 00:16:50.225 "superblock": true, 00:16:50.225 "num_base_bdevs": 4, 00:16:50.225 "num_base_bdevs_discovered": 3, 00:16:50.225 "num_base_bdevs_operational": 4, 00:16:50.225 "base_bdevs_list": [ 00:16:50.225 { 00:16:50.225 "name": null, 00:16:50.225 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:50.225 "is_configured": false, 
00:16:50.225 "data_offset": 2048, 00:16:50.225 "data_size": 63488 00:16:50.225 }, 00:16:50.225 { 00:16:50.225 "name": "BaseBdev2", 00:16:50.225 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:50.225 "is_configured": true, 00:16:50.225 "data_offset": 2048, 00:16:50.225 "data_size": 63488 00:16:50.225 }, 00:16:50.225 { 00:16:50.225 "name": "BaseBdev3", 00:16:50.225 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:50.225 "is_configured": true, 00:16:50.225 "data_offset": 2048, 00:16:50.225 "data_size": 63488 00:16:50.225 }, 00:16:50.225 { 00:16:50.225 "name": "BaseBdev4", 00:16:50.225 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:50.225 "is_configured": true, 00:16:50.225 "data_offset": 2048, 00:16:50.225 "data_size": 63488 00:16:50.225 } 00:16:50.225 ] 00:16:50.225 }' 00:16:50.225 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.225 18:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.794 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.794 18:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.794 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:50.794 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.794 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:51.064 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 54aff7b6-42d8-11ef-9ade-d5fc5159efa5 00:16:51.321 [2024-07-15 18:30:43.678817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:51.321 [2024-07-15 18:30:43.678907] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x76c59434f00 00:16:51.321 [2024-07-15 18:30:43.678914] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.321 [2024-07-15 18:30:43.678937] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x76c59497e20 00:16:51.321 [2024-07-15 18:30:43.678990] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x76c59434f00 00:16:51.321 [2024-07-15 18:30:43.678995] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x76c59434f00 00:16:51.321 [2024-07-15 18:30:43.679018] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.321 NewBaseBdev 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.321 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.579 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:51.837 [ 00:16:51.837 { 00:16:51.837 "name": "NewBaseBdev", 00:16:51.837 "aliases": [ 00:16:51.837 "54aff7b6-42d8-11ef-9ade-d5fc5159efa5" 00:16:51.837 ], 00:16:51.837 "product_name": "Malloc disk", 00:16:51.837 "block_size": 512, 00:16:51.837 "num_blocks": 65536, 00:16:51.837 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:51.837 "assigned_rate_limits": { 00:16:51.837 "rw_ios_per_sec": 0, 00:16:51.837 "rw_mbytes_per_sec": 0, 00:16:51.837 "r_mbytes_per_sec": 0, 00:16:51.837 "w_mbytes_per_sec": 0 00:16:51.837 }, 00:16:51.837 "claimed": true, 00:16:51.837 "claim_type": "exclusive_write", 00:16:51.837 "zoned": false, 00:16:51.837 "supported_io_types": { 00:16:51.837 "read": true, 00:16:51.837 "write": true, 00:16:51.837 "unmap": true, 00:16:51.837 "flush": true, 00:16:51.837 "reset": true, 00:16:51.837 "nvme_admin": false, 00:16:51.837 "nvme_io": false, 00:16:51.837 "nvme_io_md": false, 00:16:51.837 "write_zeroes": true, 00:16:51.837 "zcopy": true, 00:16:51.837 "get_zone_info": false, 00:16:51.837 "zone_management": false, 00:16:51.837 "zone_append": false, 00:16:51.837 "compare": false, 00:16:51.837 "compare_and_write": false, 00:16:51.837 "abort": true, 00:16:51.837 "seek_hole": false, 00:16:51.837 "seek_data": false, 00:16:51.837 "copy": true, 00:16:51.837 "nvme_iov_md": false 00:16:51.837 }, 00:16:51.837 "memory_domains": [ 00:16:51.837 { 00:16:51.837 "dma_device_id": "system", 00:16:51.837 "dma_device_type": 1 00:16:51.837 }, 00:16:51.837 { 00:16:51.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.837 "dma_device_type": 2 00:16:51.837 } 00:16:51.837 ], 00:16:51.837 "driver_specific": {} 00:16:51.837 } 00:16:51.837 ] 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.837 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.837 18:30:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.115 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.115 "name": "Existed_Raid", 00:16:52.115 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.115 "strip_size_kb": 0, 00:16:52.115 "state": "online", 00:16:52.115 "raid_level": "raid1", 00:16:52.115 "superblock": true, 00:16:52.115 "num_base_bdevs": 4, 00:16:52.115 "num_base_bdevs_discovered": 4, 00:16:52.115 "num_base_bdevs_operational": 4, 00:16:52.115 "base_bdevs_list": [ 00:16:52.115 { 00:16:52.115 "name": "NewBaseBdev", 00:16:52.115 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.115 "is_configured": true, 00:16:52.115 "data_offset": 2048, 00:16:52.115 "data_size": 63488 00:16:52.115 }, 00:16:52.115 { 00:16:52.115 "name": "BaseBdev2", 00:16:52.115 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.115 "is_configured": true, 00:16:52.115 "data_offset": 2048, 00:16:52.115 "data_size": 63488 00:16:52.115 }, 00:16:52.115 { 00:16:52.115 "name": "BaseBdev3", 00:16:52.115 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.115 "is_configured": true, 00:16:52.115 "data_offset": 2048, 00:16:52.115 "data_size": 63488 00:16:52.115 }, 00:16:52.115 { 00:16:52.115 "name": "BaseBdev4", 00:16:52.115 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.115 "is_configured": true, 00:16:52.115 "data_offset": 2048, 00:16:52.115 "data_size": 63488 00:16:52.115 } 00:16:52.115 ] 00:16:52.115 }' 00:16:52.115 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.115 18:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:52.690 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:52.690 [2024-07-15 18:30:45.026850] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:52.690 "name": "Existed_Raid", 00:16:52.690 "aliases": [ 00:16:52.690 "5371f2a2-42d8-11ef-9ade-d5fc5159efa5" 00:16:52.690 ], 00:16:52.690 "product_name": "Raid Volume", 00:16:52.690 "block_size": 512, 00:16:52.690 "num_blocks": 63488, 00:16:52.690 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "assigned_rate_limits": { 00:16:52.690 "rw_ios_per_sec": 0, 00:16:52.690 "rw_mbytes_per_sec": 0, 00:16:52.690 "r_mbytes_per_sec": 0, 00:16:52.690 "w_mbytes_per_sec": 0 00:16:52.690 }, 00:16:52.690 "claimed": false, 
00:16:52.690 "zoned": false, 00:16:52.690 "supported_io_types": { 00:16:52.690 "read": true, 00:16:52.690 "write": true, 00:16:52.690 "unmap": false, 00:16:52.690 "flush": false, 00:16:52.690 "reset": true, 00:16:52.690 "nvme_admin": false, 00:16:52.690 "nvme_io": false, 00:16:52.690 "nvme_io_md": false, 00:16:52.690 "write_zeroes": true, 00:16:52.690 "zcopy": false, 00:16:52.690 "get_zone_info": false, 00:16:52.690 "zone_management": false, 00:16:52.690 "zone_append": false, 00:16:52.690 "compare": false, 00:16:52.690 "compare_and_write": false, 00:16:52.690 "abort": false, 00:16:52.690 "seek_hole": false, 00:16:52.690 "seek_data": false, 00:16:52.690 "copy": false, 00:16:52.690 "nvme_iov_md": false 00:16:52.690 }, 00:16:52.690 "memory_domains": [ 00:16:52.690 { 00:16:52.690 "dma_device_id": "system", 00:16:52.690 "dma_device_type": 1 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.690 "dma_device_type": 2 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "system", 00:16:52.690 "dma_device_type": 1 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.690 "dma_device_type": 2 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "system", 00:16:52.690 "dma_device_type": 1 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.690 "dma_device_type": 2 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "system", 00:16:52.690 "dma_device_type": 1 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.690 "dma_device_type": 2 00:16:52.690 } 00:16:52.690 ], 00:16:52.690 "driver_specific": { 00:16:52.690 "raid": { 00:16:52.690 "uuid": "5371f2a2-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "strip_size_kb": 0, 00:16:52.690 "state": "online", 00:16:52.690 "raid_level": "raid1", 00:16:52.690 "superblock": true, 00:16:52.690 "num_base_bdevs": 4, 00:16:52.690 "num_base_bdevs_discovered": 4, 00:16:52.690 "num_base_bdevs_operational": 4, 00:16:52.690 "base_bdevs_list": [ 00:16:52.690 { 00:16:52.690 "name": "NewBaseBdev", 00:16:52.690 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "is_configured": true, 00:16:52.690 "data_offset": 2048, 00:16:52.690 "data_size": 63488 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "name": "BaseBdev2", 00:16:52.690 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "is_configured": true, 00:16:52.690 "data_offset": 2048, 00:16:52.690 "data_size": 63488 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "name": "BaseBdev3", 00:16:52.690 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "is_configured": true, 00:16:52.690 "data_offset": 2048, 00:16:52.690 "data_size": 63488 00:16:52.690 }, 00:16:52.690 { 00:16:52.690 "name": "BaseBdev4", 00:16:52.690 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.690 "is_configured": true, 00:16:52.690 "data_offset": 2048, 00:16:52.690 "data_size": 63488 00:16:52.690 } 00:16:52.690 ] 00:16:52.690 } 00:16:52.690 } 00:16:52.690 }' 00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:52.690 BaseBdev2 00:16:52.690 BaseBdev3 00:16:52.690 BaseBdev4' 00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
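The xtrace above is verify_raid_bdev_properties at work: it fetches the raid bdev's JSON once over the test's private RPC socket, then jq pulls the configured entries out of .driver_specific.raid.base_bdevs_list to build the list of base bdev names. A minimal standalone sketch of that pattern follows; the rpc.py path, socket, and bdev name are simply the ones visible in this run, not fixed interfaces:

    #!/usr/bin/env bash
    # Sketch: list the configured base bdevs of a raid bdev via SPDK's JSON-RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as seen in this log
    sock=/var/tmp/spdk-raid.sock                      # RPC socket of the test's bdev app
    raid_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_bdev_names=$(jq -r \
        '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
        <<< "$raid_bdev_info")
    echo "$base_bdev_names"   # -> NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4, one per line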
00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:52.690 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.949 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.949 "name": "NewBaseBdev", 00:16:52.949 "aliases": [ 00:16:52.949 "54aff7b6-42d8-11ef-9ade-d5fc5159efa5" 00:16:52.949 ], 00:16:52.949 "product_name": "Malloc disk", 00:16:52.949 "block_size": 512, 00:16:52.949 "num_blocks": 65536, 00:16:52.949 "uuid": "54aff7b6-42d8-11ef-9ade-d5fc5159efa5", 00:16:52.949 "assigned_rate_limits": { 00:16:52.949 "rw_ios_per_sec": 0, 00:16:52.949 "rw_mbytes_per_sec": 0, 00:16:52.949 "r_mbytes_per_sec": 0, 00:16:52.949 "w_mbytes_per_sec": 0 00:16:52.949 }, 00:16:52.949 "claimed": true, 00:16:52.949 "claim_type": "exclusive_write", 00:16:52.949 "zoned": false, 00:16:52.950 "supported_io_types": { 00:16:52.950 "read": true, 00:16:52.950 "write": true, 00:16:52.950 "unmap": true, 00:16:52.950 "flush": true, 00:16:52.950 "reset": true, 00:16:52.950 "nvme_admin": false, 00:16:52.950 "nvme_io": false, 00:16:52.950 "nvme_io_md": false, 00:16:52.950 "write_zeroes": true, 00:16:52.950 "zcopy": true, 00:16:52.950 "get_zone_info": false, 00:16:52.950 "zone_management": false, 00:16:52.950 "zone_append": false, 00:16:52.950 "compare": false, 00:16:52.950 "compare_and_write": false, 00:16:52.950 "abort": true, 00:16:52.950 "seek_hole": false, 00:16:52.950 "seek_data": false, 00:16:52.950 "copy": true, 00:16:52.950 "nvme_iov_md": false 00:16:52.950 }, 00:16:52.950 "memory_domains": [ 00:16:52.950 { 00:16:52.950 "dma_device_id": "system", 00:16:52.950 "dma_device_type": 1 00:16:52.950 }, 00:16:52.950 { 00:16:52.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.950 "dma_device_type": 2 00:16:52.950 } 00:16:52.950 ], 00:16:52.950 "driver_specific": {} 00:16:52.950 }' 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:53.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.468 "name": "BaseBdev2", 00:16:53.468 "aliases": [ 00:16:53.468 "5217dad7-42d8-11ef-9ade-d5fc5159efa5" 00:16:53.468 ], 00:16:53.468 "product_name": "Malloc disk", 00:16:53.468 "block_size": 512, 00:16:53.468 "num_blocks": 65536, 00:16:53.468 "uuid": "5217dad7-42d8-11ef-9ade-d5fc5159efa5", 00:16:53.468 "assigned_rate_limits": { 00:16:53.468 "rw_ios_per_sec": 0, 00:16:53.468 "rw_mbytes_per_sec": 0, 00:16:53.468 "r_mbytes_per_sec": 0, 00:16:53.468 "w_mbytes_per_sec": 0 00:16:53.468 }, 00:16:53.468 "claimed": true, 00:16:53.468 "claim_type": "exclusive_write", 00:16:53.468 "zoned": false, 00:16:53.468 "supported_io_types": { 00:16:53.468 "read": true, 00:16:53.468 "write": true, 00:16:53.468 "unmap": true, 00:16:53.468 "flush": true, 00:16:53.468 "reset": true, 00:16:53.468 "nvme_admin": false, 00:16:53.468 "nvme_io": false, 00:16:53.468 "nvme_io_md": false, 00:16:53.468 "write_zeroes": true, 00:16:53.468 "zcopy": true, 00:16:53.468 "get_zone_info": false, 00:16:53.468 "zone_management": false, 00:16:53.468 "zone_append": false, 00:16:53.468 "compare": false, 00:16:53.468 "compare_and_write": false, 00:16:53.468 "abort": true, 00:16:53.468 "seek_hole": false, 00:16:53.468 "seek_data": false, 00:16:53.468 "copy": true, 00:16:53.468 "nvme_iov_md": false 00:16:53.468 }, 00:16:53.468 "memory_domains": [ 00:16:53.468 { 00:16:53.468 "dma_device_id": "system", 00:16:53.468 "dma_device_type": 1 00:16:53.468 }, 00:16:53.468 { 00:16:53.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.468 "dma_device_type": 2 00:16:53.468 } 00:16:53.468 ], 00:16:53.468 "driver_specific": {} 00:16:53.468 }' 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:53.468 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
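The loop running above checks each base bdev field by field: one jq filter per property, compared against the expected value with a bash [[ ]] test (block_size must be 512, while md_size, md_interleave, and dif_type must all be null for these plain malloc-backed bdevs). A condensed sketch of the loop, continuing the variables from the sketch above; the hard-coded expected values are taken from this run:

    # Sketch: per-base-bdev property checks mirroring bdev_raid.sh@205-208 in the trace.
    for name in $base_bdev_names; do
        base_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$base_bdev_info") == 512  ]] || exit 1
        [[ $(jq .md_size       <<< "$base_bdev_info") == null ]] || exit 1
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]] || exit 1
        [[ $(jq .dif_type      <<< "$base_bdev_info") == null ]] || exit 1
    done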
00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.727 "name": "BaseBdev3", 00:16:53.727 "aliases": [ 00:16:53.727 "52928b16-42d8-11ef-9ade-d5fc5159efa5" 00:16:53.727 ], 00:16:53.727 "product_name": "Malloc disk", 00:16:53.727 "block_size": 512, 00:16:53.727 "num_blocks": 65536, 00:16:53.727 "uuid": "52928b16-42d8-11ef-9ade-d5fc5159efa5", 00:16:53.727 "assigned_rate_limits": { 00:16:53.727 "rw_ios_per_sec": 0, 00:16:53.727 "rw_mbytes_per_sec": 0, 00:16:53.727 "r_mbytes_per_sec": 0, 00:16:53.727 "w_mbytes_per_sec": 0 00:16:53.727 }, 00:16:53.727 "claimed": true, 00:16:53.727 "claim_type": "exclusive_write", 00:16:53.727 "zoned": false, 00:16:53.727 "supported_io_types": { 00:16:53.727 "read": true, 00:16:53.727 "write": true, 00:16:53.727 "unmap": true, 00:16:53.727 "flush": true, 00:16:53.727 "reset": true, 00:16:53.727 "nvme_admin": false, 00:16:53.727 "nvme_io": false, 00:16:53.727 "nvme_io_md": false, 00:16:53.727 "write_zeroes": true, 00:16:53.727 "zcopy": true, 00:16:53.727 "get_zone_info": false, 00:16:53.727 "zone_management": false, 00:16:53.727 "zone_append": false, 00:16:53.727 "compare": false, 00:16:53.727 "compare_and_write": false, 00:16:53.727 "abort": true, 00:16:53.727 "seek_hole": false, 00:16:53.727 "seek_data": false, 00:16:53.727 "copy": true, 00:16:53.727 "nvme_iov_md": false 00:16:53.727 }, 00:16:53.727 "memory_domains": [ 00:16:53.727 { 00:16:53.727 "dma_device_id": "system", 00:16:53.727 "dma_device_type": 1 00:16:53.727 }, 00:16:53.727 { 00:16:53.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.727 "dma_device_type": 2 00:16:53.727 } 00:16:53.727 ], 00:16:53.727 "driver_specific": {} 00:16:53.727 }' 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.727 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:53.986 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.986 "name": "BaseBdev4", 
00:16:53.986 "aliases": [ 00:16:53.986 "53037755-42d8-11ef-9ade-d5fc5159efa5" 00:16:53.986 ], 00:16:53.986 "product_name": "Malloc disk", 00:16:53.986 "block_size": 512, 00:16:53.986 "num_blocks": 65536, 00:16:53.986 "uuid": "53037755-42d8-11ef-9ade-d5fc5159efa5", 00:16:53.986 "assigned_rate_limits": { 00:16:53.986 "rw_ios_per_sec": 0, 00:16:53.986 "rw_mbytes_per_sec": 0, 00:16:53.986 "r_mbytes_per_sec": 0, 00:16:53.986 "w_mbytes_per_sec": 0 00:16:53.986 }, 00:16:53.986 "claimed": true, 00:16:53.986 "claim_type": "exclusive_write", 00:16:53.986 "zoned": false, 00:16:53.986 "supported_io_types": { 00:16:53.986 "read": true, 00:16:53.986 "write": true, 00:16:53.986 "unmap": true, 00:16:53.986 "flush": true, 00:16:53.986 "reset": true, 00:16:53.986 "nvme_admin": false, 00:16:53.986 "nvme_io": false, 00:16:53.986 "nvme_io_md": false, 00:16:53.986 "write_zeroes": true, 00:16:53.986 "zcopy": true, 00:16:53.986 "get_zone_info": false, 00:16:53.986 "zone_management": false, 00:16:53.986 "zone_append": false, 00:16:53.986 "compare": false, 00:16:53.986 "compare_and_write": false, 00:16:53.986 "abort": true, 00:16:53.986 "seek_hole": false, 00:16:53.986 "seek_data": false, 00:16:53.986 "copy": true, 00:16:53.986 "nvme_iov_md": false 00:16:53.986 }, 00:16:53.986 "memory_domains": [ 00:16:53.986 { 00:16:53.986 "dma_device_id": "system", 00:16:53.986 "dma_device_type": 1 00:16:53.986 }, 00:16:53.986 { 00:16:53.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.986 "dma_device_type": 2 00:16:53.986 } 00:16:53.986 ], 00:16:53.986 "driver_specific": {} 00:16:53.986 }' 00:16:53.986 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.245 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.504 [2024-07-15 18:30:46.726985] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.504 [2024-07-15 18:30:46.727015] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.504 [2024-07-15 18:30:46.727043] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.504 [2024-07-15 18:30:46.727122] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:16:54.504 [2024-07-15 18:30:46.727150] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x76c59434f00 name Existed_Raid, state offline 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63874 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63874 ']' 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63874 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63874 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63874' 00:16:54.504 killing process with pid 63874 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63874 00:16:54.504 [2024-07-15 18:30:46.756460] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.504 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63874 00:16:54.504 [2024-07-15 18:30:46.783962] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.763 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:54.763 00:16:54.763 real 0m27.897s 00:16:54.763 user 0m51.058s 00:16:54.763 sys 0m3.828s 00:16:54.763 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.763 ************************************ 00:16:54.763 END TEST raid_state_function_test_sb 00:16:54.763 ************************************ 00:16:54.763 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.763 18:30:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:54.763 18:30:47 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:54.763 18:30:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:54.763 18:30:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.763 18:30:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.763 ************************************ 00:16:54.763 START TEST raid_superblock_test 00:16:54.763 ************************************ 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:54.763 18:30:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64696 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64696 /var/tmp/spdk-raid.sock 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64696 ']' 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.763 18:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.763 [2024-07-15 18:30:47.044026] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:16:54.763 [2024-07-15 18:30:47.044197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:55.331 EAL: TSC is not safe to use in SMP mode 00:16:55.331 EAL: TSC is not invariant 00:16:55.331 [2024-07-15 18:30:47.636325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.331 [2024-07-15 18:30:47.720053] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
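raid_superblock_test spins up its own bdev_svc app for the duration of the test: the binary is launched with -r pointing at a private RPC socket and -L bdev_raid to enable the debug lines seen throughout this output, and the script waits (waitforlisten, with max_retries=100 above) until the socket is listening before issuing RPCs. A rough equivalent of that startup; the polling loop here is an assumed stand-in, the real waitforlisten helper in autotest_common.sh is more elaborate:

    # Sketch: start bdev_svc on a private RPC socket, then wait for it to listen.
    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    sock=/var/tmp/spdk-raid.sock
    "$svc" -r "$sock" -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do            # crude substitute for waitforlisten
        [[ -S $sock ]] && break          # -S: the UNIX domain socket exists
        sleep 0.1
    done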
00:16:55.331 [2024-07-15 18:30:47.722529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.331 [2024-07-15 18:30:47.723345] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.331 [2024-07-15 18:30:47.723360] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.929 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:56.189 malloc1 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.189 [2024-07-15 18:30:48.562232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.189 [2024-07-15 18:30:48.562308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.189 [2024-07-15 18:30:48.562322] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34780 00:16:56.189 [2024-07-15 18:30:48.562331] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.189 [2024-07-15 18:30:48.563354] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.189 [2024-07-15 18:30:48.563380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.189 pt1 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.189 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.189 18:30:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:56.474 malloc2 00:16:56.474 18:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.733 [2024-07-15 18:30:49.082288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.733 [2024-07-15 18:30:49.082358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.733 [2024-07-15 18:30:49.082371] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34c80 00:16:56.733 [2024-07-15 18:30:49.082379] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.733 [2024-07-15 18:30:49.083134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.733 [2024-07-15 18:30:49.083161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.733 pt2 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.733 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:56.990 malloc3 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.248 [2024-07-15 18:30:49.614341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.248 [2024-07-15 18:30:49.614406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.248 [2024-07-15 18:30:49.614418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35180 00:16:57.248 [2024-07-15 18:30:49.614427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.248 [2024-07-15 18:30:49.615242] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.248 [2024-07-15 18:30:49.615269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.248 pt3 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:57.248 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.249 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.249 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.249 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:57.507 malloc4 00:16:57.507 18:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:57.767 [2024-07-15 18:30:50.094384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:57.767 [2024-07-15 18:30:50.094450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.767 [2024-07-15 18:30:50.094463] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35680 00:16:57.767 [2024-07-15 18:30:50.094471] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.767 [2024-07-15 18:30:50.095250] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.767 [2024-07-15 18:30:50.095273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:57.767 pt4 00:16:57.767 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:57.767 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:57.767 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:58.026 [2024-07-15 18:30:50.370424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.026 [2024-07-15 18:30:50.371110] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.026 [2024-07-15 18:30:50.371132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:58.026 [2024-07-15 18:30:50.371144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:58.026 [2024-07-15 18:30:50.371216] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e154da35900 00:16:58.026 [2024-07-15 18:30:50.371223] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:58.026 [2024-07-15 18:30:50.371261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e154da97e20 00:16:58.026 [2024-07-15 18:30:50.371339] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e154da35900 00:16:58.026 [2024-07-15 18:30:50.371343] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e154da35900 00:16:58.026 [2024-07-15 18:30:50.371373] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.026 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.593 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.593 "name": "raid_bdev1", 00:16:58.593 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:16:58.593 "strip_size_kb": 0, 00:16:58.593 "state": "online", 00:16:58.593 "raid_level": "raid1", 00:16:58.593 "superblock": true, 00:16:58.593 "num_base_bdevs": 4, 00:16:58.593 "num_base_bdevs_discovered": 4, 00:16:58.593 "num_base_bdevs_operational": 4, 00:16:58.593 "base_bdevs_list": [ 00:16:58.593 { 00:16:58.593 "name": "pt1", 00:16:58.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.593 "is_configured": true, 00:16:58.593 "data_offset": 2048, 00:16:58.593 "data_size": 63488 00:16:58.593 }, 00:16:58.593 { 00:16:58.593 "name": "pt2", 00:16:58.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.593 "is_configured": true, 00:16:58.593 "data_offset": 2048, 00:16:58.593 "data_size": 63488 00:16:58.593 }, 00:16:58.593 { 00:16:58.593 "name": "pt3", 00:16:58.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.593 "is_configured": true, 00:16:58.593 "data_offset": 2048, 00:16:58.593 "data_size": 63488 00:16:58.593 }, 00:16:58.593 { 00:16:58.593 "name": "pt4", 00:16:58.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.593 "is_configured": true, 00:16:58.593 "data_offset": 2048, 00:16:58.593 "data_size": 63488 00:16:58.593 } 00:16:58.593 ] 00:16:58.593 }' 00:16:58.593 18:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.593 18:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local name 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:58.852 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.111 [2024-07-15 18:30:51.342574] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:59.111 "name": "raid_bdev1", 00:16:59.111 "aliases": [ 00:16:59.111 "5c7aee34-42d8-11ef-9ade-d5fc5159efa5" 00:16:59.111 ], 00:16:59.111 "product_name": "Raid Volume", 00:16:59.111 "block_size": 512, 00:16:59.111 "num_blocks": 63488, 00:16:59.111 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:16:59.111 "assigned_rate_limits": { 00:16:59.111 "rw_ios_per_sec": 0, 00:16:59.111 "rw_mbytes_per_sec": 0, 00:16:59.111 "r_mbytes_per_sec": 0, 00:16:59.111 "w_mbytes_per_sec": 0 00:16:59.111 }, 00:16:59.111 "claimed": false, 00:16:59.111 "zoned": false, 00:16:59.111 "supported_io_types": { 00:16:59.111 "read": true, 00:16:59.111 "write": true, 00:16:59.111 "unmap": false, 00:16:59.111 "flush": false, 00:16:59.111 "reset": true, 00:16:59.111 "nvme_admin": false, 00:16:59.111 "nvme_io": false, 00:16:59.111 "nvme_io_md": false, 00:16:59.111 "write_zeroes": true, 00:16:59.111 "zcopy": false, 00:16:59.111 "get_zone_info": false, 00:16:59.111 "zone_management": false, 00:16:59.111 "zone_append": false, 00:16:59.111 "compare": false, 00:16:59.111 "compare_and_write": false, 00:16:59.111 "abort": false, 00:16:59.111 "seek_hole": false, 00:16:59.111 "seek_data": false, 00:16:59.111 "copy": false, 00:16:59.111 "nvme_iov_md": false 00:16:59.111 }, 00:16:59.111 "memory_domains": [ 00:16:59.111 { 00:16:59.111 "dma_device_id": "system", 00:16:59.111 "dma_device_type": 1 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.111 "dma_device_type": 2 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "system", 00:16:59.111 "dma_device_type": 1 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.111 "dma_device_type": 2 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "system", 00:16:59.111 "dma_device_type": 1 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.111 "dma_device_type": 2 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "system", 00:16:59.111 "dma_device_type": 1 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.111 "dma_device_type": 2 00:16:59.111 } 00:16:59.111 ], 00:16:59.111 "driver_specific": { 00:16:59.111 "raid": { 00:16:59.111 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:16:59.111 "strip_size_kb": 0, 00:16:59.111 "state": "online", 00:16:59.111 "raid_level": "raid1", 00:16:59.111 "superblock": true, 00:16:59.111 "num_base_bdevs": 4, 00:16:59.111 "num_base_bdevs_discovered": 4, 00:16:59.111 "num_base_bdevs_operational": 4, 00:16:59.111 "base_bdevs_list": [ 00:16:59.111 { 00:16:59.111 "name": "pt1", 00:16:59.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.111 "is_configured": true, 00:16:59.111 "data_offset": 2048, 00:16:59.111 "data_size": 63488 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "name": "pt2", 00:16:59.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.111 "is_configured": true, 00:16:59.111 "data_offset": 2048, 00:16:59.111 "data_size": 63488 
00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "name": "pt3", 00:16:59.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.111 "is_configured": true, 00:16:59.111 "data_offset": 2048, 00:16:59.111 "data_size": 63488 00:16:59.111 }, 00:16:59.111 { 00:16:59.111 "name": "pt4", 00:16:59.111 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.111 "is_configured": true, 00:16:59.111 "data_offset": 2048, 00:16:59.111 "data_size": 63488 00:16:59.111 } 00:16:59.111 ] 00:16:59.111 } 00:16:59.111 } 00:16:59.111 }' 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:59.111 pt2 00:16:59.111 pt3 00:16:59.111 pt4' 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:59.111 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.370 "name": "pt1", 00:16:59.370 "aliases": [ 00:16:59.370 "00000000-0000-0000-0000-000000000001" 00:16:59.370 ], 00:16:59.370 "product_name": "passthru", 00:16:59.370 "block_size": 512, 00:16:59.370 "num_blocks": 65536, 00:16:59.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.370 "assigned_rate_limits": { 00:16:59.370 "rw_ios_per_sec": 0, 00:16:59.370 "rw_mbytes_per_sec": 0, 00:16:59.370 "r_mbytes_per_sec": 0, 00:16:59.370 "w_mbytes_per_sec": 0 00:16:59.370 }, 00:16:59.370 "claimed": true, 00:16:59.370 "claim_type": "exclusive_write", 00:16:59.370 "zoned": false, 00:16:59.370 "supported_io_types": { 00:16:59.370 "read": true, 00:16:59.370 "write": true, 00:16:59.370 "unmap": true, 00:16:59.370 "flush": true, 00:16:59.370 "reset": true, 00:16:59.370 "nvme_admin": false, 00:16:59.370 "nvme_io": false, 00:16:59.370 "nvme_io_md": false, 00:16:59.370 "write_zeroes": true, 00:16:59.370 "zcopy": true, 00:16:59.370 "get_zone_info": false, 00:16:59.370 "zone_management": false, 00:16:59.370 "zone_append": false, 00:16:59.370 "compare": false, 00:16:59.370 "compare_and_write": false, 00:16:59.370 "abort": true, 00:16:59.370 "seek_hole": false, 00:16:59.370 "seek_data": false, 00:16:59.370 "copy": true, 00:16:59.370 "nvme_iov_md": false 00:16:59.370 }, 00:16:59.370 "memory_domains": [ 00:16:59.370 { 00:16:59.370 "dma_device_id": "system", 00:16:59.370 "dma_device_type": 1 00:16:59.370 }, 00:16:59.370 { 00:16:59.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.370 "dma_device_type": 2 00:16:59.370 } 00:16:59.370 ], 00:16:59.370 "driver_specific": { 00:16:59.370 "passthru": { 00:16:59.370 "name": "pt1", 00:16:59.370 "base_bdev_name": "malloc1" 00:16:59.370 } 00:16:59.370 } 00:16:59.370 }' 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:59.370 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.628 "name": "pt2", 00:16:59.628 "aliases": [ 00:16:59.628 "00000000-0000-0000-0000-000000000002" 00:16:59.628 ], 00:16:59.628 "product_name": "passthru", 00:16:59.628 "block_size": 512, 00:16:59.628 "num_blocks": 65536, 00:16:59.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.628 "assigned_rate_limits": { 00:16:59.628 "rw_ios_per_sec": 0, 00:16:59.628 "rw_mbytes_per_sec": 0, 00:16:59.628 "r_mbytes_per_sec": 0, 00:16:59.628 "w_mbytes_per_sec": 0 00:16:59.628 }, 00:16:59.628 "claimed": true, 00:16:59.628 "claim_type": "exclusive_write", 00:16:59.628 "zoned": false, 00:16:59.628 "supported_io_types": { 00:16:59.628 "read": true, 00:16:59.628 "write": true, 00:16:59.628 "unmap": true, 00:16:59.628 "flush": true, 00:16:59.628 "reset": true, 00:16:59.628 "nvme_admin": false, 00:16:59.628 "nvme_io": false, 00:16:59.628 "nvme_io_md": false, 00:16:59.628 "write_zeroes": true, 00:16:59.628 "zcopy": true, 00:16:59.628 "get_zone_info": false, 00:16:59.628 "zone_management": false, 00:16:59.628 "zone_append": false, 00:16:59.628 "compare": false, 00:16:59.628 "compare_and_write": false, 00:16:59.628 "abort": true, 00:16:59.628 "seek_hole": false, 00:16:59.628 "seek_data": false, 00:16:59.628 "copy": true, 00:16:59.628 "nvme_iov_md": false 00:16:59.628 }, 00:16:59.628 "memory_domains": [ 00:16:59.628 { 00:16:59.628 "dma_device_id": "system", 00:16:59.628 "dma_device_type": 1 00:16:59.628 }, 00:16:59.628 { 00:16:59.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.628 "dma_device_type": 2 00:16:59.628 } 00:16:59.628 ], 00:16:59.628 "driver_specific": { 00:16:59.628 "passthru": { 00:16:59.628 "name": "pt2", 00:16:59.628 "base_bdev_name": "malloc2" 00:16:59.628 } 00:16:59.628 } 00:16:59.628 }' 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.628 18:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:59.886 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.886 "name": "pt3", 00:16:59.886 "aliases": [ 00:16:59.886 "00000000-0000-0000-0000-000000000003" 00:16:59.886 ], 00:16:59.886 "product_name": "passthru", 00:16:59.886 "block_size": 512, 00:16:59.886 "num_blocks": 65536, 00:16:59.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.886 "assigned_rate_limits": { 00:16:59.886 "rw_ios_per_sec": 0, 00:16:59.886 "rw_mbytes_per_sec": 0, 00:16:59.886 "r_mbytes_per_sec": 0, 00:16:59.886 "w_mbytes_per_sec": 0 00:16:59.886 }, 00:16:59.886 "claimed": true, 00:16:59.886 "claim_type": "exclusive_write", 00:16:59.886 "zoned": false, 00:16:59.886 "supported_io_types": { 00:16:59.886 "read": true, 00:16:59.886 "write": true, 00:16:59.886 "unmap": true, 00:16:59.886 "flush": true, 00:16:59.886 "reset": true, 00:16:59.886 "nvme_admin": false, 00:16:59.886 "nvme_io": false, 00:16:59.886 "nvme_io_md": false, 00:16:59.886 "write_zeroes": true, 00:16:59.886 "zcopy": true, 00:16:59.886 "get_zone_info": false, 00:16:59.886 "zone_management": false, 00:16:59.886 "zone_append": false, 00:16:59.886 "compare": false, 00:16:59.886 "compare_and_write": false, 00:16:59.886 "abort": true, 00:16:59.886 "seek_hole": false, 00:16:59.886 "seek_data": false, 00:16:59.886 "copy": true, 00:16:59.886 "nvme_iov_md": false 00:16:59.886 }, 00:16:59.886 "memory_domains": [ 00:16:59.886 { 00:16:59.886 "dma_device_id": "system", 00:16:59.886 "dma_device_type": 1 00:16:59.886 }, 00:16:59.886 { 00:16:59.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.886 "dma_device_type": 2 00:16:59.886 } 00:16:59.886 ], 00:16:59.886 "driver_specific": { 00:16:59.886 "passthru": { 00:16:59.886 "name": "pt3", 00:16:59.886 "base_bdev_name": "malloc3" 00:16:59.886 } 00:16:59.886 } 00:16:59.886 }' 00:16:59.886 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.886 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.886 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:00.144 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.402 "name": "pt4", 00:17:00.402 "aliases": [ 00:17:00.402 "00000000-0000-0000-0000-000000000004" 00:17:00.402 ], 00:17:00.402 "product_name": "passthru", 00:17:00.402 "block_size": 512, 00:17:00.402 "num_blocks": 65536, 00:17:00.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.402 "assigned_rate_limits": { 00:17:00.402 "rw_ios_per_sec": 0, 00:17:00.402 "rw_mbytes_per_sec": 0, 00:17:00.402 "r_mbytes_per_sec": 0, 00:17:00.402 "w_mbytes_per_sec": 0 00:17:00.402 }, 00:17:00.402 "claimed": true, 00:17:00.402 "claim_type": "exclusive_write", 00:17:00.402 "zoned": false, 00:17:00.402 "supported_io_types": { 00:17:00.402 "read": true, 00:17:00.402 "write": true, 00:17:00.402 "unmap": true, 00:17:00.402 "flush": true, 00:17:00.402 "reset": true, 00:17:00.402 "nvme_admin": false, 00:17:00.402 "nvme_io": false, 00:17:00.402 "nvme_io_md": false, 00:17:00.402 "write_zeroes": true, 00:17:00.402 "zcopy": true, 00:17:00.402 "get_zone_info": false, 00:17:00.402 "zone_management": false, 00:17:00.402 "zone_append": false, 00:17:00.402 "compare": false, 00:17:00.402 "compare_and_write": false, 00:17:00.402 "abort": true, 00:17:00.402 "seek_hole": false, 00:17:00.402 "seek_data": false, 00:17:00.402 "copy": true, 00:17:00.402 "nvme_iov_md": false 00:17:00.402 }, 00:17:00.402 "memory_domains": [ 00:17:00.402 { 00:17:00.402 "dma_device_id": "system", 00:17:00.402 "dma_device_type": 1 00:17:00.402 }, 00:17:00.402 { 00:17:00.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.402 "dma_device_type": 2 00:17:00.402 } 00:17:00.402 ], 00:17:00.402 "driver_specific": { 00:17:00.402 "passthru": { 00:17:00.402 "name": "pt4", 00:17:00.402 "base_bdev_name": "malloc4" 00:17:00.402 } 00:17:00.402 } 00:17:00.402 }' 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:00.402 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.661 [2024-07-15 18:30:52.882703] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.661 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5c7aee34-42d8-11ef-9ade-d5fc5159efa5 00:17:00.661 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 5c7aee34-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:00.661 18:30:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:00.919 [2024-07-15 18:30:53.122643] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.919 [2024-07-15 18:30:53.122672] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.919 [2024-07-15 18:30:53.122699] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.920 [2024-07-15 18:30:53.122723] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.920 [2024-07-15 18:30:53.122728] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e154da35900 name raid_bdev1, state offline 00:17:00.920 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.920 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:01.178 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:01.178 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:01.178 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.178 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:01.436 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.436 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:01.694 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.694 18:30:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:01.952 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.952 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:02.210 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:02.210 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:02.469 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:02.728 [2024-07-15 18:30:54.958924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.728 [2024-07-15 18:30:54.959672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.728 [2024-07-15 18:30:54.959691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:02.728 [2024-07-15 18:30:54.959700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:02.728 [2024-07-15 18:30:54.959717] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.728 [2024-07-15 18:30:54.959763] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.728 [2024-07-15 18:30:54.959774] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:02.728 [2024-07-15 18:30:54.959784] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:02.728 [2024-07-15 18:30:54.959792] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.728 [2024-07-15 18:30:54.959796] bdev_raid.c: 367:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x3e154da35680 name raid_bdev1, state configuring 00:17:02.728 request: 00:17:02.728 { 00:17:02.728 "name": "raid_bdev1", 00:17:02.728 "raid_level": "raid1", 00:17:02.728 "base_bdevs": [ 00:17:02.728 "malloc1", 00:17:02.728 "malloc2", 00:17:02.728 "malloc3", 00:17:02.728 "malloc4" 00:17:02.728 ], 00:17:02.728 "superblock": false, 00:17:02.728 "method": "bdev_raid_create", 00:17:02.728 "req_id": 1 00:17:02.728 } 00:17:02.728 Got JSON-RPC error response 00:17:02.728 response: 00:17:02.728 { 00:17:02.728 "code": -17, 00:17:02.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.728 } 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.728 18:30:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:02.987 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:02.987 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:02.987 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.246 [2024-07-15 18:30:55.434989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.246 [2024-07-15 18:30:55.435064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.246 [2024-07-15 18:30:55.435077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35180 00:17:03.246 [2024-07-15 18:30:55.435087] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.246 [2024-07-15 18:30:55.435896] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.246 [2024-07-15 18:30:55.435920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.246 [2024-07-15 18:30:55.435950] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:03.246 [2024-07-15 18:30:55.435963] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.246 pt1 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.246 
18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.246 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.505 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.505 "name": "raid_bdev1", 00:17:03.505 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:03.505 "strip_size_kb": 0, 00:17:03.505 "state": "configuring", 00:17:03.505 "raid_level": "raid1", 00:17:03.505 "superblock": true, 00:17:03.505 "num_base_bdevs": 4, 00:17:03.505 "num_base_bdevs_discovered": 1, 00:17:03.505 "num_base_bdevs_operational": 4, 00:17:03.505 "base_bdevs_list": [ 00:17:03.505 { 00:17:03.505 "name": "pt1", 00:17:03.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.505 "is_configured": true, 00:17:03.505 "data_offset": 2048, 00:17:03.505 "data_size": 63488 00:17:03.505 }, 00:17:03.505 { 00:17:03.505 "name": null, 00:17:03.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.505 "is_configured": false, 00:17:03.505 "data_offset": 2048, 00:17:03.505 "data_size": 63488 00:17:03.505 }, 00:17:03.505 { 00:17:03.505 "name": null, 00:17:03.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.505 "is_configured": false, 00:17:03.505 "data_offset": 2048, 00:17:03.505 "data_size": 63488 00:17:03.505 }, 00:17:03.505 { 00:17:03.505 "name": null, 00:17:03.505 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.505 "is_configured": false, 00:17:03.505 "data_offset": 2048, 00:17:03.505 "data_size": 63488 00:17:03.505 } 00:17:03.505 ] 00:17:03.505 }' 00:17:03.505 18:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.505 18:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.763 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:17:03.763 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.021 [2024-07-15 18:30:56.331118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.021 [2024-07-15 18:30:56.331199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.021 [2024-07-15 18:30:56.331212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34780 00:17:04.021 [2024-07-15 18:30:56.331220] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.021 [2024-07-15 18:30:56.331349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.021 [2024-07-15 18:30:56.331366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.021 [2024-07-15 18:30:56.331395] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.021 [2024-07-15 18:30:56.331404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.021 pt2 00:17:04.021 18:30:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:04.279 [2024-07-15 18:30:56.603148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.279 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.280 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.537 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.537 "name": "raid_bdev1", 00:17:04.537 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:04.537 "strip_size_kb": 0, 00:17:04.537 "state": "configuring", 00:17:04.537 "raid_level": "raid1", 00:17:04.537 "superblock": true, 00:17:04.538 "num_base_bdevs": 4, 00:17:04.538 "num_base_bdevs_discovered": 1, 00:17:04.538 "num_base_bdevs_operational": 4, 00:17:04.538 "base_bdevs_list": [ 00:17:04.538 { 00:17:04.538 "name": "pt1", 00:17:04.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.538 "is_configured": true, 00:17:04.538 "data_offset": 2048, 00:17:04.538 "data_size": 63488 00:17:04.538 }, 00:17:04.538 { 00:17:04.538 "name": null, 00:17:04.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.538 "is_configured": false, 00:17:04.538 "data_offset": 2048, 00:17:04.538 "data_size": 63488 00:17:04.538 }, 00:17:04.538 { 00:17:04.538 "name": null, 00:17:04.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.538 "is_configured": false, 00:17:04.538 "data_offset": 2048, 00:17:04.538 "data_size": 63488 00:17:04.538 }, 00:17:04.538 { 00:17:04.538 "name": null, 00:17:04.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.538 "is_configured": false, 00:17:04.538 "data_offset": 2048, 00:17:04.538 "data_size": 63488 00:17:04.538 } 00:17:04.538 ] 00:17:04.538 }' 00:17:04.538 18:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.538 18:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.105 [2024-07-15 18:30:57.463211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.105 [2024-07-15 18:30:57.463279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.105 [2024-07-15 18:30:57.463292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34780 00:17:05.105 [2024-07-15 18:30:57.463300] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.105 [2024-07-15 18:30:57.463432] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.105 [2024-07-15 18:30:57.463444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.105 [2024-07-15 18:30:57.463472] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:05.105 [2024-07-15 18:30:57.463482] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.105 pt2 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:05.105 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.364 [2024-07-15 18:30:57.751224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.364 [2024-07-15 18:30:57.751280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.364 [2024-07-15 18:30:57.751303] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35b80 00:17:05.364 [2024-07-15 18:30:57.751312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.364 [2024-07-15 18:30:57.751436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.364 [2024-07-15 18:30:57.751448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.364 [2024-07-15 18:30:57.751483] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:05.364 [2024-07-15 18:30:57.751492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.364 pt3 00:17:05.622 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:05.622 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:05.622 18:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.881 [2024-07-15 18:30:58.031251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.881 [2024-07-15 18:30:58.031314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.881 [2024-07-15 18:30:58.031328] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35900 00:17:05.881 [2024-07-15 18:30:58.031336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.881 [2024-07-15 18:30:58.031466] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.881 [2024-07-15 18:30:58.031478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.881 [2024-07-15 18:30:58.031506] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:05.881 [2024-07-15 18:30:58.031516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.881 [2024-07-15 18:30:58.031557] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e154da34c80 00:17:05.881 [2024-07-15 18:30:58.031563] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:05.881 [2024-07-15 18:30:58.031585] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e154da97e20 00:17:05.881 [2024-07-15 18:30:58.031656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e154da34c80 00:17:05.881 [2024-07-15 18:30:58.031661] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e154da34c80 00:17:05.881 [2024-07-15 18:30:58.031684] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.881 pt4 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.881 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.139 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.139 "name": "raid_bdev1", 00:17:06.139 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:06.139 "strip_size_kb": 0, 00:17:06.139 "state": "online", 00:17:06.139 "raid_level": "raid1", 00:17:06.139 "superblock": true, 00:17:06.139 "num_base_bdevs": 4, 00:17:06.139 "num_base_bdevs_discovered": 4, 00:17:06.139 "num_base_bdevs_operational": 4, 00:17:06.139 "base_bdevs_list": [ 00:17:06.139 { 00:17:06.139 "name": "pt1", 00:17:06.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.139 "is_configured": true, 00:17:06.139 "data_offset": 2048, 00:17:06.139 "data_size": 63488 00:17:06.139 
}, 00:17:06.139 { 00:17:06.139 "name": "pt2", 00:17:06.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.139 "is_configured": true, 00:17:06.139 "data_offset": 2048, 00:17:06.139 "data_size": 63488 00:17:06.139 }, 00:17:06.139 { 00:17:06.139 "name": "pt3", 00:17:06.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.139 "is_configured": true, 00:17:06.139 "data_offset": 2048, 00:17:06.139 "data_size": 63488 00:17:06.139 }, 00:17:06.139 { 00:17:06.139 "name": "pt4", 00:17:06.139 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.139 "is_configured": true, 00:17:06.139 "data_offset": 2048, 00:17:06.139 "data_size": 63488 00:17:06.139 } 00:17:06.139 ] 00:17:06.139 }' 00:17:06.139 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.139 18:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:06.397 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:06.655 [2024-07-15 18:30:58.847361] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.655 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:06.655 "name": "raid_bdev1", 00:17:06.655 "aliases": [ 00:17:06.655 "5c7aee34-42d8-11ef-9ade-d5fc5159efa5" 00:17:06.655 ], 00:17:06.655 "product_name": "Raid Volume", 00:17:06.655 "block_size": 512, 00:17:06.655 "num_blocks": 63488, 00:17:06.655 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:06.655 "assigned_rate_limits": { 00:17:06.655 "rw_ios_per_sec": 0, 00:17:06.655 "rw_mbytes_per_sec": 0, 00:17:06.655 "r_mbytes_per_sec": 0, 00:17:06.655 "w_mbytes_per_sec": 0 00:17:06.655 }, 00:17:06.655 "claimed": false, 00:17:06.655 "zoned": false, 00:17:06.655 "supported_io_types": { 00:17:06.655 "read": true, 00:17:06.655 "write": true, 00:17:06.655 "unmap": false, 00:17:06.655 "flush": false, 00:17:06.655 "reset": true, 00:17:06.655 "nvme_admin": false, 00:17:06.655 "nvme_io": false, 00:17:06.655 "nvme_io_md": false, 00:17:06.655 "write_zeroes": true, 00:17:06.655 "zcopy": false, 00:17:06.655 "get_zone_info": false, 00:17:06.655 "zone_management": false, 00:17:06.655 "zone_append": false, 00:17:06.655 "compare": false, 00:17:06.655 "compare_and_write": false, 00:17:06.655 "abort": false, 00:17:06.655 "seek_hole": false, 00:17:06.655 "seek_data": false, 00:17:06.655 "copy": false, 00:17:06.655 "nvme_iov_md": false 00:17:06.655 }, 00:17:06.655 "memory_domains": [ 00:17:06.655 { 00:17:06.655 "dma_device_id": "system", 00:17:06.655 "dma_device_type": 1 00:17:06.655 }, 00:17:06.655 { 00:17:06.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.655 "dma_device_type": 2 00:17:06.655 }, 
00:17:06.655 { 00:17:06.655 "dma_device_id": "system", 00:17:06.655 "dma_device_type": 1 00:17:06.655 }, 00:17:06.655 { 00:17:06.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.655 "dma_device_type": 2 00:17:06.655 }, 00:17:06.655 { 00:17:06.655 "dma_device_id": "system", 00:17:06.655 "dma_device_type": 1 00:17:06.655 }, 00:17:06.655 { 00:17:06.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.655 "dma_device_type": 2 00:17:06.655 }, 00:17:06.655 { 00:17:06.655 "dma_device_id": "system", 00:17:06.655 "dma_device_type": 1 00:17:06.655 }, 00:17:06.656 { 00:17:06.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.656 "dma_device_type": 2 00:17:06.656 } 00:17:06.656 ], 00:17:06.656 "driver_specific": { 00:17:06.656 "raid": { 00:17:06.656 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:06.656 "strip_size_kb": 0, 00:17:06.656 "state": "online", 00:17:06.656 "raid_level": "raid1", 00:17:06.656 "superblock": true, 00:17:06.656 "num_base_bdevs": 4, 00:17:06.656 "num_base_bdevs_discovered": 4, 00:17:06.656 "num_base_bdevs_operational": 4, 00:17:06.656 "base_bdevs_list": [ 00:17:06.656 { 00:17:06.656 "name": "pt1", 00:17:06.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.656 "is_configured": true, 00:17:06.656 "data_offset": 2048, 00:17:06.656 "data_size": 63488 00:17:06.656 }, 00:17:06.656 { 00:17:06.656 "name": "pt2", 00:17:06.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.656 "is_configured": true, 00:17:06.656 "data_offset": 2048, 00:17:06.656 "data_size": 63488 00:17:06.656 }, 00:17:06.656 { 00:17:06.656 "name": "pt3", 00:17:06.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.656 "is_configured": true, 00:17:06.656 "data_offset": 2048, 00:17:06.656 "data_size": 63488 00:17:06.656 }, 00:17:06.656 { 00:17:06.656 "name": "pt4", 00:17:06.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.656 "is_configured": true, 00:17:06.656 "data_offset": 2048, 00:17:06.656 "data_size": 63488 00:17:06.656 } 00:17:06.656 ] 00:17:06.656 } 00:17:06.656 } 00:17:06.656 }' 00:17:06.656 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.656 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:06.656 pt2 00:17:06.656 pt3 00:17:06.656 pt4' 00:17:06.656 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.656 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:06.656 18:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.913 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.913 "name": "pt1", 00:17:06.913 "aliases": [ 00:17:06.913 "00000000-0000-0000-0000-000000000001" 00:17:06.913 ], 00:17:06.913 "product_name": "passthru", 00:17:06.913 "block_size": 512, 00:17:06.913 "num_blocks": 65536, 00:17:06.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.913 "assigned_rate_limits": { 00:17:06.913 "rw_ios_per_sec": 0, 00:17:06.913 "rw_mbytes_per_sec": 0, 00:17:06.913 "r_mbytes_per_sec": 0, 00:17:06.913 "w_mbytes_per_sec": 0 00:17:06.913 }, 00:17:06.913 "claimed": true, 00:17:06.913 "claim_type": "exclusive_write", 00:17:06.913 "zoned": false, 00:17:06.914 "supported_io_types": { 00:17:06.914 "read": true, 00:17:06.914 "write": true, 00:17:06.914 
"unmap": true, 00:17:06.914 "flush": true, 00:17:06.914 "reset": true, 00:17:06.914 "nvme_admin": false, 00:17:06.914 "nvme_io": false, 00:17:06.914 "nvme_io_md": false, 00:17:06.914 "write_zeroes": true, 00:17:06.914 "zcopy": true, 00:17:06.914 "get_zone_info": false, 00:17:06.914 "zone_management": false, 00:17:06.914 "zone_append": false, 00:17:06.914 "compare": false, 00:17:06.914 "compare_and_write": false, 00:17:06.914 "abort": true, 00:17:06.914 "seek_hole": false, 00:17:06.914 "seek_data": false, 00:17:06.914 "copy": true, 00:17:06.914 "nvme_iov_md": false 00:17:06.914 }, 00:17:06.914 "memory_domains": [ 00:17:06.914 { 00:17:06.914 "dma_device_id": "system", 00:17:06.914 "dma_device_type": 1 00:17:06.914 }, 00:17:06.914 { 00:17:06.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.914 "dma_device_type": 2 00:17:06.914 } 00:17:06.914 ], 00:17:06.914 "driver_specific": { 00:17:06.914 "passthru": { 00:17:06.914 "name": "pt1", 00:17:06.914 "base_bdev_name": "malloc1" 00:17:06.914 } 00:17:06.914 } 00:17:06.914 }' 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:06.914 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.172 "name": "pt2", 00:17:07.172 "aliases": [ 00:17:07.172 "00000000-0000-0000-0000-000000000002" 00:17:07.172 ], 00:17:07.172 "product_name": "passthru", 00:17:07.172 "block_size": 512, 00:17:07.172 "num_blocks": 65536, 00:17:07.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.172 "assigned_rate_limits": { 00:17:07.172 "rw_ios_per_sec": 0, 00:17:07.172 "rw_mbytes_per_sec": 0, 00:17:07.172 "r_mbytes_per_sec": 0, 00:17:07.172 "w_mbytes_per_sec": 0 00:17:07.172 }, 00:17:07.172 "claimed": true, 00:17:07.172 "claim_type": "exclusive_write", 00:17:07.172 "zoned": false, 00:17:07.172 "supported_io_types": { 00:17:07.172 "read": true, 00:17:07.172 "write": true, 00:17:07.172 "unmap": true, 00:17:07.172 "flush": true, 00:17:07.172 "reset": true, 00:17:07.172 "nvme_admin": false, 00:17:07.172 "nvme_io": false, 00:17:07.172 
"nvme_io_md": false, 00:17:07.172 "write_zeroes": true, 00:17:07.172 "zcopy": true, 00:17:07.172 "get_zone_info": false, 00:17:07.172 "zone_management": false, 00:17:07.172 "zone_append": false, 00:17:07.172 "compare": false, 00:17:07.172 "compare_and_write": false, 00:17:07.172 "abort": true, 00:17:07.172 "seek_hole": false, 00:17:07.172 "seek_data": false, 00:17:07.172 "copy": true, 00:17:07.172 "nvme_iov_md": false 00:17:07.172 }, 00:17:07.172 "memory_domains": [ 00:17:07.172 { 00:17:07.172 "dma_device_id": "system", 00:17:07.172 "dma_device_type": 1 00:17:07.172 }, 00:17:07.172 { 00:17:07.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.172 "dma_device_type": 2 00:17:07.172 } 00:17:07.172 ], 00:17:07.172 "driver_specific": { 00:17:07.172 "passthru": { 00:17:07.172 "name": "pt2", 00:17:07.172 "base_bdev_name": "malloc2" 00:17:07.172 } 00:17:07.172 } 00:17:07.172 }' 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:07.172 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.431 "name": "pt3", 00:17:07.431 "aliases": [ 00:17:07.431 "00000000-0000-0000-0000-000000000003" 00:17:07.431 ], 00:17:07.431 "product_name": "passthru", 00:17:07.431 "block_size": 512, 00:17:07.431 "num_blocks": 65536, 00:17:07.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.431 "assigned_rate_limits": { 00:17:07.431 "rw_ios_per_sec": 0, 00:17:07.431 "rw_mbytes_per_sec": 0, 00:17:07.431 "r_mbytes_per_sec": 0, 00:17:07.431 "w_mbytes_per_sec": 0 00:17:07.431 }, 00:17:07.431 "claimed": true, 00:17:07.431 "claim_type": "exclusive_write", 00:17:07.431 "zoned": false, 00:17:07.431 "supported_io_types": { 00:17:07.431 "read": true, 00:17:07.431 "write": true, 00:17:07.431 "unmap": true, 00:17:07.431 "flush": true, 00:17:07.431 "reset": true, 00:17:07.431 "nvme_admin": false, 00:17:07.431 "nvme_io": false, 00:17:07.431 "nvme_io_md": false, 00:17:07.431 "write_zeroes": true, 00:17:07.431 "zcopy": true, 00:17:07.431 "get_zone_info": false, 00:17:07.431 "zone_management": 
false, 00:17:07.431 "zone_append": false, 00:17:07.431 "compare": false, 00:17:07.431 "compare_and_write": false, 00:17:07.431 "abort": true, 00:17:07.431 "seek_hole": false, 00:17:07.431 "seek_data": false, 00:17:07.431 "copy": true, 00:17:07.431 "nvme_iov_md": false 00:17:07.431 }, 00:17:07.431 "memory_domains": [ 00:17:07.431 { 00:17:07.431 "dma_device_id": "system", 00:17:07.431 "dma_device_type": 1 00:17:07.431 }, 00:17:07.431 { 00:17:07.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.431 "dma_device_type": 2 00:17:07.431 } 00:17:07.431 ], 00:17:07.431 "driver_specific": { 00:17:07.431 "passthru": { 00:17:07.431 "name": "pt3", 00:17:07.431 "base_bdev_name": "malloc3" 00:17:07.431 } 00:17:07.431 } 00:17:07.431 }' 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:07.431 18:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:08.001 "name": "pt4", 00:17:08.001 "aliases": [ 00:17:08.001 "00000000-0000-0000-0000-000000000004" 00:17:08.001 ], 00:17:08.001 "product_name": "passthru", 00:17:08.001 "block_size": 512, 00:17:08.001 "num_blocks": 65536, 00:17:08.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.001 "assigned_rate_limits": { 00:17:08.001 "rw_ios_per_sec": 0, 00:17:08.001 "rw_mbytes_per_sec": 0, 00:17:08.001 "r_mbytes_per_sec": 0, 00:17:08.001 "w_mbytes_per_sec": 0 00:17:08.001 }, 00:17:08.001 "claimed": true, 00:17:08.001 "claim_type": "exclusive_write", 00:17:08.001 "zoned": false, 00:17:08.001 "supported_io_types": { 00:17:08.001 "read": true, 00:17:08.001 "write": true, 00:17:08.001 "unmap": true, 00:17:08.001 "flush": true, 00:17:08.001 "reset": true, 00:17:08.001 "nvme_admin": false, 00:17:08.001 "nvme_io": false, 00:17:08.001 "nvme_io_md": false, 00:17:08.001 "write_zeroes": true, 00:17:08.001 "zcopy": true, 00:17:08.001 "get_zone_info": false, 00:17:08.001 "zone_management": false, 00:17:08.001 "zone_append": false, 00:17:08.001 "compare": false, 00:17:08.001 "compare_and_write": false, 00:17:08.001 "abort": true, 00:17:08.001 
"seek_hole": false, 00:17:08.001 "seek_data": false, 00:17:08.001 "copy": true, 00:17:08.001 "nvme_iov_md": false 00:17:08.001 }, 00:17:08.001 "memory_domains": [ 00:17:08.001 { 00:17:08.001 "dma_device_id": "system", 00:17:08.001 "dma_device_type": 1 00:17:08.001 }, 00:17:08.001 { 00:17:08.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.001 "dma_device_type": 2 00:17:08.001 } 00:17:08.001 ], 00:17:08.001 "driver_specific": { 00:17:08.001 "passthru": { 00:17:08.001 "name": "pt4", 00:17:08.001 "base_bdev_name": "malloc4" 00:17:08.001 } 00:17:08.001 } 00:17:08.001 }' 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:08.001 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:08.259 [2024-07-15 18:31:00.411540] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.259 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 5c7aee34-42d8-11ef-9ade-d5fc5159efa5 '!=' 5c7aee34-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:08.259 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:08.259 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.259 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:08.259 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:08.517 [2024-07-15 18:31:00.683536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.517 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.775 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.775 "name": "raid_bdev1", 00:17:08.775 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:08.775 "strip_size_kb": 0, 00:17:08.775 "state": "online", 00:17:08.775 "raid_level": "raid1", 00:17:08.775 "superblock": true, 00:17:08.775 "num_base_bdevs": 4, 00:17:08.775 "num_base_bdevs_discovered": 3, 00:17:08.775 "num_base_bdevs_operational": 3, 00:17:08.775 "base_bdevs_list": [ 00:17:08.775 { 00:17:08.775 "name": null, 00:17:08.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.775 "is_configured": false, 00:17:08.775 "data_offset": 2048, 00:17:08.775 "data_size": 63488 00:17:08.775 }, 00:17:08.775 { 00:17:08.775 "name": "pt2", 00:17:08.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.776 "is_configured": true, 00:17:08.776 "data_offset": 2048, 00:17:08.776 "data_size": 63488 00:17:08.776 }, 00:17:08.776 { 00:17:08.776 "name": "pt3", 00:17:08.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.776 "is_configured": true, 00:17:08.776 "data_offset": 2048, 00:17:08.776 "data_size": 63488 00:17:08.776 }, 00:17:08.776 { 00:17:08.776 "name": "pt4", 00:17:08.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.776 "is_configured": true, 00:17:08.776 "data_offset": 2048, 00:17:08.776 "data_size": 63488 00:17:08.776 } 00:17:08.776 ] 00:17:08.776 }' 00:17:08.776 18:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.776 18:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.035 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:09.293 [2024-07-15 18:31:01.535566] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.293 [2024-07-15 18:31:01.535598] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.293 [2024-07-15 18:31:01.535626] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.293 [2024-07-15 18:31:01.535653] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.293 [2024-07-15 18:31:01.535658] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e154da34c80 name raid_bdev1, state offline 00:17:09.293 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.293 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:09.552 18:31:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:09.552 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:09.552 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:09.552 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:09.552 18:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:09.811 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:09.811 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:09.811 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:10.070 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:10.070 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:10.070 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:10.329 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:10.329 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:10.329 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:10.329 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:10.329 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.587 [2024-07-15 18:31:02.807667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.587 [2024-07-15 18:31:02.807738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.587 [2024-07-15 18:31:02.807751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35900 00:17:10.587 [2024-07-15 18:31:02.807760] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.587 [2024-07-15 18:31:02.808526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.587 [2024-07-15 18:31:02.808552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.587 [2024-07-15 18:31:02.808582] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:10.587 [2024-07-15 18:31:02.808596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.587 pt2 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.587 18:31:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.587 18:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.845 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.846 "name": "raid_bdev1", 00:17:10.846 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:10.846 "strip_size_kb": 0, 00:17:10.846 "state": "configuring", 00:17:10.846 "raid_level": "raid1", 00:17:10.846 "superblock": true, 00:17:10.846 "num_base_bdevs": 4, 00:17:10.846 "num_base_bdevs_discovered": 1, 00:17:10.846 "num_base_bdevs_operational": 3, 00:17:10.846 "base_bdevs_list": [ 00:17:10.846 { 00:17:10.846 "name": null, 00:17:10.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.846 "is_configured": false, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "name": "pt2", 00:17:10.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.846 "is_configured": true, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "name": null, 00:17:10.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.846 "is_configured": false, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "name": null, 00:17:10.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.846 "is_configured": false, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 } 00:17:10.846 ] 00:17:10.846 }' 00:17:10.846 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.846 18:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.104 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:17:11.104 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:11.104 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.362 [2024-07-15 18:31:03.667717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.362 [2024-07-15 18:31:03.667797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.362 [2024-07-15 18:31:03.667811] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35680 00:17:11.362 [2024-07-15 18:31:03.667820] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.362 [2024-07-15 18:31:03.667977] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.362 [2024-07-15 18:31:03.667994] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:17:11.362 [2024-07-15 18:31:03.668023] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:11.362 [2024-07-15 18:31:03.668033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.362 pt3 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.362 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.620 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.620 "name": "raid_bdev1", 00:17:11.620 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:11.620 "strip_size_kb": 0, 00:17:11.620 "state": "configuring", 00:17:11.620 "raid_level": "raid1", 00:17:11.620 "superblock": true, 00:17:11.620 "num_base_bdevs": 4, 00:17:11.620 "num_base_bdevs_discovered": 2, 00:17:11.620 "num_base_bdevs_operational": 3, 00:17:11.620 "base_bdevs_list": [ 00:17:11.620 { 00:17:11.620 "name": null, 00:17:11.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.620 "is_configured": false, 00:17:11.620 "data_offset": 2048, 00:17:11.620 "data_size": 63488 00:17:11.620 }, 00:17:11.620 { 00:17:11.620 "name": "pt2", 00:17:11.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.620 "is_configured": true, 00:17:11.620 "data_offset": 2048, 00:17:11.620 "data_size": 63488 00:17:11.620 }, 00:17:11.620 { 00:17:11.620 "name": "pt3", 00:17:11.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.620 "is_configured": true, 00:17:11.620 "data_offset": 2048, 00:17:11.620 "data_size": 63488 00:17:11.620 }, 00:17:11.620 { 00:17:11.620 "name": null, 00:17:11.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.620 "is_configured": false, 00:17:11.620 "data_offset": 2048, 00:17:11.620 "data_size": 63488 00:17:11.620 } 00:17:11.620 ] 00:17:11.620 }' 00:17:11.620 18:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.620 18:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.880 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:17:11.880 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:11.880 18:31:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:17:11.880 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:12.139 [2024-07-15 18:31:04.523798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:12.139 [2024-07-15 18:31:04.523871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.139 [2024-07-15 18:31:04.523884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34c80 00:17:12.139 [2024-07-15 18:31:04.523893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.139 [2024-07-15 18:31:04.524025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.139 [2024-07-15 18:31:04.524036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:12.139 [2024-07-15 18:31:04.524065] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:12.140 [2024-07-15 18:31:04.524075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:12.140 [2024-07-15 18:31:04.524110] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e154da34780 00:17:12.140 [2024-07-15 18:31:04.524115] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:12.140 [2024-07-15 18:31:04.524136] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e154da97e20 00:17:12.140 [2024-07-15 18:31:04.524185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e154da34780 00:17:12.140 [2024-07-15 18:31:04.524189] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e154da34780 00:17:12.140 [2024-07-15 18:31:04.524212] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.140 pt4 00:17:12.398 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:12.398 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:12.398 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:12.398 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:12.399 "name": "raid_bdev1", 00:17:12.399 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:12.399 "strip_size_kb": 0, 00:17:12.399 "state": "online", 00:17:12.399 "raid_level": "raid1", 00:17:12.399 "superblock": true, 00:17:12.399 "num_base_bdevs": 4, 00:17:12.399 "num_base_bdevs_discovered": 3, 00:17:12.399 "num_base_bdevs_operational": 3, 00:17:12.399 "base_bdevs_list": [ 00:17:12.399 { 00:17:12.399 "name": null, 00:17:12.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.399 "is_configured": false, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "pt2", 00:17:12.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.399 "is_configured": true, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "pt3", 00:17:12.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.399 "is_configured": true, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "pt4", 00:17:12.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:12.399 "is_configured": true, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 } 00:17:12.399 ] 00:17:12.399 }' 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.399 18:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.023 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:13.023 [2024-07-15 18:31:05.407857] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.024 [2024-07-15 18:31:05.407893] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.024 [2024-07-15 18:31:05.407921] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.024 [2024-07-15 18:31:05.407942] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.024 [2024-07-15 18:31:05.407946] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e154da34780 name raid_bdev1, state offline 00:17:13.283 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.283 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:13.541 18:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.800 [2024-07-15 18:31:06.195922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:17:13.800 [2024-07-15 18:31:06.196002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.800 [2024-07-15 18:31:06.196016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da34c80 00:17:13.800 [2024-07-15 18:31:06.196025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.800 [2024-07-15 18:31:06.196799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.800 [2024-07-15 18:31:06.196827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.800 [2024-07-15 18:31:06.196856] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:13.800 [2024-07-15 18:31:06.196869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.800 [2024-07-15 18:31:06.196903] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:13.800 [2024-07-15 18:31:06.196908] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.800 [2024-07-15 18:31:06.196913] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e154da34780 name raid_bdev1, state configuring 00:17:13.800 [2024-07-15 18:31:06.196921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.800 [2024-07-15 18:31:06.196941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.059 pt1 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.059 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.059 "name": "raid_bdev1", 00:17:14.059 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:14.059 "strip_size_kb": 0, 00:17:14.059 "state": "configuring", 00:17:14.059 "raid_level": "raid1", 00:17:14.060 "superblock": true, 00:17:14.060 "num_base_bdevs": 4, 00:17:14.060 "num_base_bdevs_discovered": 2, 00:17:14.060 "num_base_bdevs_operational": 3, 00:17:14.060 
"base_bdevs_list": [ 00:17:14.060 { 00:17:14.060 "name": null, 00:17:14.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.060 "is_configured": false, 00:17:14.060 "data_offset": 2048, 00:17:14.060 "data_size": 63488 00:17:14.060 }, 00:17:14.060 { 00:17:14.060 "name": "pt2", 00:17:14.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.060 "is_configured": true, 00:17:14.060 "data_offset": 2048, 00:17:14.060 "data_size": 63488 00:17:14.060 }, 00:17:14.060 { 00:17:14.060 "name": "pt3", 00:17:14.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.060 "is_configured": true, 00:17:14.060 "data_offset": 2048, 00:17:14.060 "data_size": 63488 00:17:14.060 }, 00:17:14.060 { 00:17:14.060 "name": null, 00:17:14.060 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.060 "is_configured": false, 00:17:14.060 "data_offset": 2048, 00:17:14.060 "data_size": 63488 00:17:14.060 } 00:17:14.060 ] 00:17:14.060 }' 00:17:14.060 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.060 18:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.628 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:14.628 18:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:14.886 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:17:14.886 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:15.144 [2024-07-15 18:31:07.300000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:15.144 [2024-07-15 18:31:07.300068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.144 [2024-07-15 18:31:07.300081] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e154da35180 00:17:15.144 [2024-07-15 18:31:07.300090] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.144 [2024-07-15 18:31:07.300230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.144 [2024-07-15 18:31:07.300242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:15.144 [2024-07-15 18:31:07.300269] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:15.144 [2024-07-15 18:31:07.300279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:15.144 [2024-07-15 18:31:07.300315] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e154da34780 00:17:15.144 [2024-07-15 18:31:07.300319] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.144 [2024-07-15 18:31:07.300341] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e154da97e20 00:17:15.144 [2024-07-15 18:31:07.300394] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e154da34780 00:17:15.144 [2024-07-15 18:31:07.300398] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e154da34780 00:17:15.144 [2024-07-15 18:31:07.300420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.144 pt4 
00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.144 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.405 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.405 "name": "raid_bdev1", 00:17:15.405 "uuid": "5c7aee34-42d8-11ef-9ade-d5fc5159efa5", 00:17:15.405 "strip_size_kb": 0, 00:17:15.405 "state": "online", 00:17:15.405 "raid_level": "raid1", 00:17:15.405 "superblock": true, 00:17:15.405 "num_base_bdevs": 4, 00:17:15.405 "num_base_bdevs_discovered": 3, 00:17:15.405 "num_base_bdevs_operational": 3, 00:17:15.405 "base_bdevs_list": [ 00:17:15.405 { 00:17:15.405 "name": null, 00:17:15.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.405 "is_configured": false, 00:17:15.405 "data_offset": 2048, 00:17:15.405 "data_size": 63488 00:17:15.405 }, 00:17:15.405 { 00:17:15.405 "name": "pt2", 00:17:15.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.405 "is_configured": true, 00:17:15.405 "data_offset": 2048, 00:17:15.405 "data_size": 63488 00:17:15.405 }, 00:17:15.405 { 00:17:15.405 "name": "pt3", 00:17:15.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.405 "is_configured": true, 00:17:15.405 "data_offset": 2048, 00:17:15.405 "data_size": 63488 00:17:15.405 }, 00:17:15.405 { 00:17:15.405 "name": "pt4", 00:17:15.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.405 "is_configured": true, 00:17:15.405 "data_offset": 2048, 00:17:15.405 "data_size": 63488 00:17:15.405 } 00:17:15.405 ] 00:17:15.405 }' 00:17:15.405 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.405 18:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.663 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:15.663 18:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:15.922 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:15.922 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.922 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:16.181 [2024-07-15 18:31:08.336126] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 5c7aee34-42d8-11ef-9ade-d5fc5159efa5 '!=' 5c7aee34-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64696 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64696 ']' 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64696 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64696 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:16.181 killing process with pid 64696 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64696' 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64696 00:17:16.181 [2024-07-15 18:31:08.363766] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.181 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64696 00:17:16.181 [2024-07-15 18:31:08.363805] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.181 [2024-07-15 18:31:08.363828] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.181 [2024-07-15 18:31:08.363833] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e154da34780 name raid_bdev1, state offline 00:17:16.181 [2024-07-15 18:31:08.391666] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.440 18:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:16.440 00:17:16.440 real 0m21.553s 00:17:16.440 user 0m38.984s 00:17:16.440 sys 0m3.268s 00:17:16.440 ************************************ 00:17:16.440 END TEST raid_superblock_test 00:17:16.440 ************************************ 00:17:16.440 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.440 18:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.440 18:31:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:16.440 18:31:08 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:16.440 18:31:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:16.440 18:31:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.440 18:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.440 ************************************ 00:17:16.440 START TEST raid_read_error_test 00:17:16.440 
************************************ 00:17:16.440 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:17:16.440 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:16.440 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:16.440 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.AsiwXSYcmK 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65332 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65332 /var/tmp/spdk-raid.sock 00:17:16.441 18:31:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65332 ']' 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.441 18:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 [2024-07-15 18:31:08.655771] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:16.441 [2024-07-15 18:31:08.655947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:17.007 EAL: TSC is not safe to use in SMP mode 00:17:17.007 EAL: TSC is not invariant 00:17:17.007 [2024-07-15 18:31:09.267657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.007 [2024-07-15 18:31:09.371005] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:17.007 [2024-07-15 18:31:09.373186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.007 [2024-07-15 18:31:09.373949] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.008 [2024-07-15 18:31:09.373960] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.266 18:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.266 18:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:17.266 18:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:17.266 18:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:17.833 BaseBdev1_malloc 00:17:17.834 18:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:18.093 true 00:17:18.093 18:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:18.351 [2024-07-15 18:31:10.526173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:18.351 [2024-07-15 18:31:10.526239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.351 [2024-07-15 18:31:10.526267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b4fe834780 00:17:18.351 [2024-07-15 18:31:10.526276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.351 [2024-07-15 18:31:10.526972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.351 [2024-07-15 18:31:10.526996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.351 BaseBdev1 
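Each base device in this error test is a three-layer stack: a malloc bdev as backing storage, an error bdev wrapped around it (which gains the EE_ prefix) so failures can be injected later, and a passthru bdev on top for the raid module to claim. A sketch of that setup loop, assuming the sizes and naming conventions visible in the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"         # 32 MB backing store, 512-byte blocks
        $RPC bdev_error_create "BaseBdev${i}_malloc"                    # exposes EE_BaseBdev<i>_malloc
        $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done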
00:17:18.351 18:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:18.351 18:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:18.610 BaseBdev2_malloc 00:17:18.610 18:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:18.869 true 00:17:18.869 18:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:19.127 [2024-07-15 18:31:11.310215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:19.127 [2024-07-15 18:31:11.310282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.128 [2024-07-15 18:31:11.310324] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b4fe834c80 00:17:19.128 [2024-07-15 18:31:11.310333] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.128 [2024-07-15 18:31:11.311025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.128 [2024-07-15 18:31:11.311059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:19.128 BaseBdev2 00:17:19.128 18:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:19.128 18:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:19.385 BaseBdev3_malloc 00:17:19.385 18:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:19.643 true 00:17:19.643 18:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:19.902 [2024-07-15 18:31:12.126306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:19.902 [2024-07-15 18:31:12.126360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.902 [2024-07-15 18:31:12.126384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b4fe835180 00:17:19.902 [2024-07-15 18:31:12.126393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.902 [2024-07-15 18:31:12.127079] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.902 [2024-07-15 18:31:12.127115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:19.902 BaseBdev3 00:17:19.902 18:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:19.902 18:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:20.162 BaseBdev4_malloc 00:17:20.162 18:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:17:20.421 true 00:17:20.421 18:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:20.680 [2024-07-15 18:31:12.842362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:20.680 [2024-07-15 18:31:12.842418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.680 [2024-07-15 18:31:12.842443] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b4fe835680 00:17:20.680 [2024-07-15 18:31:12.842452] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.680 [2024-07-15 18:31:12.843160] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.680 [2024-07-15 18:31:12.843197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:20.680 BaseBdev4 00:17:20.680 18:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:20.680 [2024-07-15 18:31:13.070389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.680 [2024-07-15 18:31:13.070999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.680 [2024-07-15 18:31:13.071018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.680 [2024-07-15 18:31:13.071032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:20.680 [2024-07-15 18:31:13.071100] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11b4fe835900 00:17:20.680 [2024-07-15 18:31:13.071106] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:20.680 [2024-07-15 18:31:13.071153] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11b4fe8a0e20 00:17:20.680 [2024-07-15 18:31:13.071254] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11b4fe835900 00:17:20.680 [2024-07-15 18:31:13.071259] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11b4fe835900 00:17:20.680 [2024-07-15 18:31:13.071286] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.938 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.209 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.209 "name": "raid_bdev1", 00:17:21.209 "uuid": "6a02abba-42d8-11ef-9ade-d5fc5159efa5", 00:17:21.209 "strip_size_kb": 0, 00:17:21.209 "state": "online", 00:17:21.209 "raid_level": "raid1", 00:17:21.209 "superblock": true, 00:17:21.209 "num_base_bdevs": 4, 00:17:21.209 "num_base_bdevs_discovered": 4, 00:17:21.209 "num_base_bdevs_operational": 4, 00:17:21.209 "base_bdevs_list": [ 00:17:21.209 { 00:17:21.209 "name": "BaseBdev1", 00:17:21.209 "uuid": "f136dfa1-5746-0355-bf5c-ab182d9465ad", 00:17:21.209 "is_configured": true, 00:17:21.209 "data_offset": 2048, 00:17:21.209 "data_size": 63488 00:17:21.209 }, 00:17:21.209 { 00:17:21.209 "name": "BaseBdev2", 00:17:21.209 "uuid": "c8605da9-8ccd-3c58-8e57-96953194db2d", 00:17:21.209 "is_configured": true, 00:17:21.209 "data_offset": 2048, 00:17:21.209 "data_size": 63488 00:17:21.209 }, 00:17:21.209 { 00:17:21.209 "name": "BaseBdev3", 00:17:21.209 "uuid": "929a3fd8-2c5a-0450-a9d0-f605f43eb16b", 00:17:21.209 "is_configured": true, 00:17:21.209 "data_offset": 2048, 00:17:21.209 "data_size": 63488 00:17:21.209 }, 00:17:21.209 { 00:17:21.209 "name": "BaseBdev4", 00:17:21.209 "uuid": "e4889960-cac9-7050-ae70-f88f01140160", 00:17:21.209 "is_configured": true, 00:17:21.209 "data_offset": 2048, 00:17:21.209 "data_size": 63488 00:17:21.209 } 00:17:21.209 ] 00:17:21.209 }' 00:17:21.209 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.209 18:31:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.466 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:21.466 18:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:21.466 [2024-07-15 18:31:13.822650] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11b4fe8a0ec0 00:17:22.402 18:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.660 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.919 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.919 "name": "raid_bdev1", 00:17:22.919 "uuid": "6a02abba-42d8-11ef-9ade-d5fc5159efa5", 00:17:22.919 "strip_size_kb": 0, 00:17:22.919 "state": "online", 00:17:22.919 "raid_level": "raid1", 00:17:22.919 "superblock": true, 00:17:22.919 "num_base_bdevs": 4, 00:17:22.919 "num_base_bdevs_discovered": 4, 00:17:22.919 "num_base_bdevs_operational": 4, 00:17:22.919 "base_bdevs_list": [ 00:17:22.919 { 00:17:22.919 "name": "BaseBdev1", 00:17:22.919 "uuid": "f136dfa1-5746-0355-bf5c-ab182d9465ad", 00:17:22.919 "is_configured": true, 00:17:22.919 "data_offset": 2048, 00:17:22.919 "data_size": 63488 00:17:22.919 }, 00:17:22.919 { 00:17:22.919 "name": "BaseBdev2", 00:17:22.919 "uuid": "c8605da9-8ccd-3c58-8e57-96953194db2d", 00:17:22.919 "is_configured": true, 00:17:22.919 "data_offset": 2048, 00:17:22.919 "data_size": 63488 00:17:22.919 }, 00:17:22.919 { 00:17:22.919 "name": "BaseBdev3", 00:17:22.919 "uuid": "929a3fd8-2c5a-0450-a9d0-f605f43eb16b", 00:17:22.919 "is_configured": true, 00:17:22.919 "data_offset": 2048, 00:17:22.919 "data_size": 63488 00:17:22.919 }, 00:17:22.919 { 00:17:22.919 "name": "BaseBdev4", 00:17:22.919 "uuid": "e4889960-cac9-7050-ae70-f88f01140160", 00:17:22.919 "is_configured": true, 00:17:22.919 "data_offset": 2048, 00:17:22.919 "data_size": 63488 00:17:22.919 } 00:17:22.919 ] 00:17:22.919 }' 00:17:22.919 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.919 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.178 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:23.437 [2024-07-15 18:31:15.795006] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.437 [2024-07-15 18:31:15.795036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.437 [2024-07-15 18:31:15.795448] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.437 [2024-07-15 18:31:15.795459] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.437 [2024-07-15 18:31:15.795478] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.437 [2024-07-15 18:31:15.795483] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11b4fe835900 name raid_bdev1, state offline 
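Note that the injected read failure above did not shrink the array: for raid1 a failed read can presumably be served from a mirror, so the harness set expected_num_base_bdevs=4 and the JSON still reports num_base_bdevs_discovered of 4. A sketch of that check, assuming the same socket and jq filter used throughout (the variable name n is illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # raid1 reads are recoverable, so all four base bdevs should still be discovered
    n=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')
    [[ "$n" == "4" ]]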
00:17:23.437 0 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65332 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65332 ']' 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65332 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:23.437 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65332 00:17:23.438 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:23.438 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:23.438 killing process with pid 65332 00:17:23.438 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65332' 00:17:23.438 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65332 00:17:23.438 [2024-07-15 18:31:15.823762] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.438 18:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65332 00:17:23.697 [2024-07-15 18:31:15.851192] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.AsiwXSYcmK 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:23.697 18:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:23.697 00:17:23.697 real 0m7.427s 00:17:23.698 user 0m11.730s 00:17:23.698 sys 0m1.315s 00:17:23.698 18:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.698 18:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.698 ************************************ 00:17:23.698 END TEST raid_read_error_test 00:17:23.698 ************************************ 00:17:23.957 18:31:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:23.957 18:31:16 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:23.957 18:31:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:23.957 18:31:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.957 18:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.957 ************************************ 00:17:23.957 START TEST raid_write_error_test 00:17:23.957 ************************************ 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 
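Before tearing down, the read test graded bdevperf's output: it stripped the Job lines from the captured log, picked the raid_bdev1 row, and required the column the harness calls fail_per_s to be 0.00. The write variant starting here repeats the same flow with error_io_type=write, where raid1 is instead expected to drop the failing leg. A sketch of the grading step, using the exact pipeline from the trace above:

    # /raidtest/tmp.AsiwXSYcmK is the bdevperf log captured for the read test
    fail_per_s=$(grep -v Job /raidtest/tmp.AsiwXSYcmK | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" == "0.00" ]]   # no failed I/Os despite the injected read error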
00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VQo1WnvfUZ 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65470 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65470 /var/tmp/spdk-raid.sock 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65470 ']' 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:23.957 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.958 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:23.958 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:23.958 18:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:23.958 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.958 18:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.958 [2024-07-15 18:31:16.137309] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:23.958 [2024-07-15 18:31:16.137480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:24.525 EAL: TSC is not safe to use in SMP mode 00:17:24.525 EAL: TSC is not invariant 00:17:24.525 [2024-07-15 18:31:16.750410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.525 [2024-07-15 18:31:16.857074] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:24.525 [2024-07-15 18:31:16.859142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.525 [2024-07-15 18:31:16.859933] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.525 [2024-07-15 18:31:16.859948] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.092 18:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.092 18:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:25.092 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.092 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.092 BaseBdev1_malloc 00:17:25.092 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:25.351 true 00:17:25.351 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:25.609 [2024-07-15 18:31:17.931667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:25.609 [2024-07-15 18:31:17.931733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.609 [2024-07-15 18:31:17.931763] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2bcca34780 00:17:25.609 [2024-07-15 18:31:17.931774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.609 [2024-07-15 18:31:17.932444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.609 [2024-07-15 18:31:17.932471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.609 BaseBdev1 00:17:25.609 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 
00:17:25.610 18:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:25.867 BaseBdev2_malloc 00:17:25.867 18:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:26.124 true 00:17:26.125 18:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:26.382 [2024-07-15 18:31:18.707744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:26.382 [2024-07-15 18:31:18.707814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.382 [2024-07-15 18:31:18.707845] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2bcca34c80 00:17:26.382 [2024-07-15 18:31:18.707855] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.382 [2024-07-15 18:31:18.708531] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.382 [2024-07-15 18:31:18.708558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.382 BaseBdev2 00:17:26.382 18:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.382 18:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:26.640 BaseBdev3_malloc 00:17:26.640 18:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:26.899 true 00:17:26.899 18:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:27.157 [2024-07-15 18:31:19.451806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:27.157 [2024-07-15 18:31:19.451899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.157 [2024-07-15 18:31:19.451945] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2bcca35180 00:17:27.157 [2024-07-15 18:31:19.451964] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.157 [2024-07-15 18:31:19.452732] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.157 [2024-07-15 18:31:19.452808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:27.157 BaseBdev3 00:17:27.157 18:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:27.157 18:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:27.414 BaseBdev4_malloc 00:17:27.414 18:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:27.672 true 00:17:27.672 18:31:19 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:27.929 [2024-07-15 18:31:20.195826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:27.929 [2024-07-15 18:31:20.195882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.929 [2024-07-15 18:31:20.195911] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2bcca35680 00:17:27.929 [2024-07-15 18:31:20.195921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.929 [2024-07-15 18:31:20.196568] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.929 [2024-07-15 18:31:20.196595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:27.929 BaseBdev4 00:17:27.929 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:28.188 [2024-07-15 18:31:20.495857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.188 [2024-07-15 18:31:20.496435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.188 [2024-07-15 18:31:20.496461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.188 [2024-07-15 18:31:20.496477] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.188 [2024-07-15 18:31:20.496546] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b2bcca35900 00:17:28.188 [2024-07-15 18:31:20.496553] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:28.188 [2024-07-15 18:31:20.496590] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b2bccaa0e20 00:17:28.188 [2024-07-15 18:31:20.496672] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b2bcca35900 00:17:28.188 [2024-07-15 18:31:20.496677] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b2bcca35900 00:17:28.188 [2024-07-15 18:31:20.496706] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.188 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.446 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.446 "name": "raid_bdev1", 00:17:28.446 "uuid": "6e6fb50a-42d8-11ef-9ade-d5fc5159efa5", 00:17:28.446 "strip_size_kb": 0, 00:17:28.446 "state": "online", 00:17:28.446 "raid_level": "raid1", 00:17:28.446 "superblock": true, 00:17:28.446 "num_base_bdevs": 4, 00:17:28.446 "num_base_bdevs_discovered": 4, 00:17:28.446 "num_base_bdevs_operational": 4, 00:17:28.446 "base_bdevs_list": [ 00:17:28.446 { 00:17:28.446 "name": "BaseBdev1", 00:17:28.446 "uuid": "d781175b-6567-9152-91af-85bd244ef6c3", 00:17:28.446 "is_configured": true, 00:17:28.446 "data_offset": 2048, 00:17:28.446 "data_size": 63488 00:17:28.446 }, 00:17:28.446 { 00:17:28.446 "name": "BaseBdev2", 00:17:28.446 "uuid": "72456444-5b13-5252-8c9c-e5fdb0fac353", 00:17:28.446 "is_configured": true, 00:17:28.446 "data_offset": 2048, 00:17:28.446 "data_size": 63488 00:17:28.446 }, 00:17:28.446 { 00:17:28.446 "name": "BaseBdev3", 00:17:28.446 "uuid": "68b246ce-3b57-e253-880a-b31070f42517", 00:17:28.446 "is_configured": true, 00:17:28.446 "data_offset": 2048, 00:17:28.446 "data_size": 63488 00:17:28.446 }, 00:17:28.446 { 00:17:28.446 "name": "BaseBdev4", 00:17:28.446 "uuid": "7f1ac73d-10f8-d054-99b5-a31c952d272c", 00:17:28.446 "is_configured": true, 00:17:28.446 "data_offset": 2048, 00:17:28.446 "data_size": 63488 00:17:28.446 } 00:17:28.446 ] 00:17:28.446 }' 00:17:28.446 18:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.446 18:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.704 18:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:28.704 18:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:28.962 [2024-07-15 18:31:21.216122] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b2bccaa0ec0 00:17:29.897 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:30.156 [2024-07-15 18:31:22.411174] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:30.156 [2024-07-15 18:31:22.411228] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.156 [2024-07-15 18:31:22.411361] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1b2bccaa0ec0 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:30.156 
18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.156 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.449 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.449 "name": "raid_bdev1", 00:17:30.449 "uuid": "6e6fb50a-42d8-11ef-9ade-d5fc5159efa5", 00:17:30.449 "strip_size_kb": 0, 00:17:30.449 "state": "online", 00:17:30.449 "raid_level": "raid1", 00:17:30.449 "superblock": true, 00:17:30.449 "num_base_bdevs": 4, 00:17:30.449 "num_base_bdevs_discovered": 3, 00:17:30.449 "num_base_bdevs_operational": 3, 00:17:30.449 "base_bdevs_list": [ 00:17:30.449 { 00:17:30.449 "name": null, 00:17:30.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.449 "is_configured": false, 00:17:30.449 "data_offset": 2048, 00:17:30.449 "data_size": 63488 00:17:30.449 }, 00:17:30.449 { 00:17:30.449 "name": "BaseBdev2", 00:17:30.449 "uuid": "72456444-5b13-5252-8c9c-e5fdb0fac353", 00:17:30.449 "is_configured": true, 00:17:30.449 "data_offset": 2048, 00:17:30.449 "data_size": 63488 00:17:30.449 }, 00:17:30.449 { 00:17:30.449 "name": "BaseBdev3", 00:17:30.449 "uuid": "68b246ce-3b57-e253-880a-b31070f42517", 00:17:30.449 "is_configured": true, 00:17:30.449 "data_offset": 2048, 00:17:30.449 "data_size": 63488 00:17:30.449 }, 00:17:30.449 { 00:17:30.449 "name": "BaseBdev4", 00:17:30.449 "uuid": "7f1ac73d-10f8-d054-99b5-a31c952d272c", 00:17:30.449 "is_configured": true, 00:17:30.449 "data_offset": 2048, 00:17:30.449 "data_size": 63488 00:17:30.449 } 00:17:30.449 ] 00:17:30.449 }' 00:17:30.449 18:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.449 18:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.708 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:30.969 [2024-07-15 18:31:23.222000] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.969 [2024-07-15 18:31:23.222028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.969 [2024-07-15 18:31:23.222360] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.969 [2024-07-15 18:31:23.222371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:30.969 [2024-07-15 18:31:23.222388] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.969 [2024-07-15 18:31:23.222393] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b2bcca35900 name raid_bdev1, state offline 00:17:30.969 0 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65470 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65470 ']' 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65470 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65470 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:30.969 killing process with pid 65470 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65470' 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65470 00:17:30.969 [2024-07-15 18:31:23.252378] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.969 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65470 00:17:30.969 [2024-07-15 18:31:23.279540] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VQo1WnvfUZ 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:31.228 00:17:31.228 real 0m7.380s 00:17:31.228 user 0m11.749s 00:17:31.228 sys 0m1.206s 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.228 ************************************ 00:17:31.228 END TEST raid_write_error_test 00:17:31.228 ************************************ 00:17:31.228 18:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.228 18:31:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:31.228 18:31:23 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:17:31.228 18:31:23 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:17:31.228 18:31:23 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:17:31.228 18:31:23 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:31.228 18:31:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:31.228 18:31:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.228 18:31:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.228 ************************************ 00:17:31.228 START TEST raid_state_function_test_sb_4k 00:17:31.228 ************************************ 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65606 00:17:31.228 Process raid pid: 65606 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 
-- # echo 'Process raid pid: 65606' 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65606 /var/tmp/spdk-raid.sock 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65606 ']' 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.228 18:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.228 [2024-07-15 18:31:23.551608] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:31.228 [2024-07-15 18:31:23.551774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:31.795 EAL: TSC is not safe to use in SMP mode 00:17:31.795 EAL: TSC is not invariant 00:17:31.796 [2024-07-15 18:31:24.148673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.052 [2024-07-15 18:31:24.255126] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:32.052 [2024-07-15 18:31:24.257211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.052 [2024-07-15 18:31:24.257981] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.052 [2024-07-15 18:31:24.257994] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.309 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.309 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:17:32.309 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:32.567 [2024-07-15 18:31:24.794152] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.567 [2024-07-15 18:31:24.794220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.567 [2024-07-15 18:31:24.794225] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.567 [2024-07-15 18:31:24.794234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.567 18:31:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.824 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.824 "name": "Existed_Raid", 00:17:32.824 "uuid": "70ff934e-42d8-11ef-9ade-d5fc5159efa5", 00:17:32.824 "strip_size_kb": 0, 00:17:32.824 "state": "configuring", 00:17:32.824 "raid_level": "raid1", 00:17:32.824 "superblock": true, 00:17:32.824 "num_base_bdevs": 2, 00:17:32.824 "num_base_bdevs_discovered": 0, 00:17:32.824 "num_base_bdevs_operational": 2, 00:17:32.824 "base_bdevs_list": [ 00:17:32.824 { 00:17:32.824 "name": "BaseBdev1", 00:17:32.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.824 "is_configured": false, 00:17:32.824 "data_offset": 0, 00:17:32.824 "data_size": 0 00:17:32.824 }, 00:17:32.824 { 00:17:32.824 "name": "BaseBdev2", 00:17:32.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.824 "is_configured": false, 00:17:32.824 "data_offset": 0, 00:17:32.824 "data_size": 0 00:17:32.824 } 00:17:32.824 ] 00:17:32.824 }' 00:17:32.824 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.824 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.082 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.340 [2024-07-15 18:31:25.670166] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.340 [2024-07-15 18:31:25.670192] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3231dc34500 name Existed_Raid, state configuring 00:17:33.340 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:33.611 [2024-07-15 18:31:25.938196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.611 [2024-07-15 18:31:25.938244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.611 [2024-07-15 18:31:25.938249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.611 [2024-07-15 18:31:25.938258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.611 18:31:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:17:33.871 [2024-07-15 18:31:26.219312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.871 BaseBdev1 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:33.871 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.129 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.388 [ 00:17:34.388 { 00:17:34.388 "name": "BaseBdev1", 00:17:34.388 "aliases": [ 00:17:34.388 "71d8df23-42d8-11ef-9ade-d5fc5159efa5" 00:17:34.388 ], 00:17:34.388 "product_name": "Malloc disk", 00:17:34.388 "block_size": 4096, 00:17:34.388 "num_blocks": 8192, 00:17:34.388 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:34.388 "assigned_rate_limits": { 00:17:34.388 "rw_ios_per_sec": 0, 00:17:34.388 "rw_mbytes_per_sec": 0, 00:17:34.388 "r_mbytes_per_sec": 0, 00:17:34.388 "w_mbytes_per_sec": 0 00:17:34.388 }, 00:17:34.388 "claimed": true, 00:17:34.388 "claim_type": "exclusive_write", 00:17:34.388 "zoned": false, 00:17:34.388 "supported_io_types": { 00:17:34.388 "read": true, 00:17:34.388 "write": true, 00:17:34.388 "unmap": true, 00:17:34.388 "flush": true, 00:17:34.388 "reset": true, 00:17:34.388 "nvme_admin": false, 00:17:34.388 "nvme_io": false, 00:17:34.388 "nvme_io_md": false, 00:17:34.388 "write_zeroes": true, 00:17:34.388 "zcopy": true, 00:17:34.388 "get_zone_info": false, 00:17:34.388 "zone_management": false, 00:17:34.388 "zone_append": false, 00:17:34.388 "compare": false, 00:17:34.388 "compare_and_write": false, 00:17:34.388 "abort": true, 00:17:34.388 "seek_hole": false, 00:17:34.388 "seek_data": false, 00:17:34.388 "copy": true, 00:17:34.388 "nvme_iov_md": false 00:17:34.388 }, 00:17:34.388 "memory_domains": [ 00:17:34.388 { 00:17:34.388 "dma_device_id": "system", 00:17:34.388 "dma_device_type": 1 00:17:34.388 }, 00:17:34.388 { 00:17:34.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.388 "dma_device_type": 2 00:17:34.388 } 00:17:34.388 ], 00:17:34.388 "driver_specific": {} 00:17:34.388 } 00:17:34.388 ] 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.388 
18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.388 18:31:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.646 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.646 "name": "Existed_Raid", 00:17:34.646 "uuid": "71ae249a-42d8-11ef-9ade-d5fc5159efa5", 00:17:34.646 "strip_size_kb": 0, 00:17:34.646 "state": "configuring", 00:17:34.646 "raid_level": "raid1", 00:17:34.646 "superblock": true, 00:17:34.646 "num_base_bdevs": 2, 00:17:34.646 "num_base_bdevs_discovered": 1, 00:17:34.646 "num_base_bdevs_operational": 2, 00:17:34.646 "base_bdevs_list": [ 00:17:34.646 { 00:17:34.646 "name": "BaseBdev1", 00:17:34.646 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:34.646 "is_configured": true, 00:17:34.646 "data_offset": 256, 00:17:34.646 "data_size": 7936 00:17:34.646 }, 00:17:34.646 { 00:17:34.647 "name": "BaseBdev2", 00:17:34.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.647 "is_configured": false, 00:17:34.647 "data_offset": 0, 00:17:34.647 "data_size": 0 00:17:34.647 } 00:17:34.647 ] 00:17:34.647 }' 00:17:34.647 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.647 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.212 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:35.212 [2024-07-15 18:31:27.570305] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.212 [2024-07-15 18:31:27.570338] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3231dc34500 name Existed_Raid, state configuring 00:17:35.212 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:35.470 [2024-07-15 18:31:27.846352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.470 [2024-07-15 18:31:27.847218] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.470 [2024-07-15 18:31:27.847261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:35.470 18:31:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.470 18:31:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.728 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.728 "name": "Existed_Raid", 00:17:35.728 "uuid": "72d14dc3-42d8-11ef-9ade-d5fc5159efa5", 00:17:35.728 "strip_size_kb": 0, 00:17:35.728 "state": "configuring", 00:17:35.728 "raid_level": "raid1", 00:17:35.728 "superblock": true, 00:17:35.728 "num_base_bdevs": 2, 00:17:35.728 "num_base_bdevs_discovered": 1, 00:17:35.728 "num_base_bdevs_operational": 2, 00:17:35.728 "base_bdevs_list": [ 00:17:35.728 { 00:17:35.728 "name": "BaseBdev1", 00:17:35.728 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:35.728 "is_configured": true, 00:17:35.728 "data_offset": 256, 00:17:35.728 "data_size": 7936 00:17:35.728 }, 00:17:35.728 { 00:17:35.728 "name": "BaseBdev2", 00:17:35.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.728 "is_configured": false, 00:17:35.728 "data_offset": 0, 00:17:35.728 "data_size": 0 00:17:35.728 } 00:17:35.728 ] 00:17:35.728 }' 00:17:35.728 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.728 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:17:36.294 [2024-07-15 18:31:28.674759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.294 [2024-07-15 18:31:28.674888] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3231dc34a00 00:17:36.294 [2024-07-15 18:31:28.674901] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.294 [2024-07-15 18:31:28.674938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3231dc97e20 00:17:36.294 
[2024-07-15 18:31:28.675036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3231dc34a00 00:17:36.294 [2024-07-15 18:31:28.675046] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3231dc34a00 00:17:36.294 [2024-07-15 18:31:28.675088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.294 BaseBdev2 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.294 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.552 18:31:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.810 [ 00:17:36.810 { 00:17:36.810 "name": "BaseBdev2", 00:17:36.810 "aliases": [ 00:17:36.810 "734fab2c-42d8-11ef-9ade-d5fc5159efa5" 00:17:36.810 ], 00:17:36.810 "product_name": "Malloc disk", 00:17:36.810 "block_size": 4096, 00:17:36.810 "num_blocks": 8192, 00:17:36.810 "uuid": "734fab2c-42d8-11ef-9ade-d5fc5159efa5", 00:17:36.810 "assigned_rate_limits": { 00:17:36.810 "rw_ios_per_sec": 0, 00:17:36.810 "rw_mbytes_per_sec": 0, 00:17:36.810 "r_mbytes_per_sec": 0, 00:17:36.810 "w_mbytes_per_sec": 0 00:17:36.810 }, 00:17:36.810 "claimed": true, 00:17:36.810 "claim_type": "exclusive_write", 00:17:36.810 "zoned": false, 00:17:36.810 "supported_io_types": { 00:17:36.810 "read": true, 00:17:36.810 "write": true, 00:17:36.810 "unmap": true, 00:17:36.810 "flush": true, 00:17:36.810 "reset": true, 00:17:36.810 "nvme_admin": false, 00:17:36.810 "nvme_io": false, 00:17:36.810 "nvme_io_md": false, 00:17:36.810 "write_zeroes": true, 00:17:36.810 "zcopy": true, 00:17:36.810 "get_zone_info": false, 00:17:36.810 "zone_management": false, 00:17:36.810 "zone_append": false, 00:17:36.810 "compare": false, 00:17:36.810 "compare_and_write": false, 00:17:36.810 "abort": true, 00:17:36.810 "seek_hole": false, 00:17:36.810 "seek_data": false, 00:17:36.810 "copy": true, 00:17:36.810 "nvme_iov_md": false 00:17:36.810 }, 00:17:36.810 "memory_domains": [ 00:17:36.810 { 00:17:36.810 "dma_device_id": "system", 00:17:36.810 "dma_device_type": 1 00:17:36.810 }, 00:17:36.810 { 00:17:36.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.810 "dma_device_type": 2 00:17:36.810 } 00:17:36.810 ], 00:17:36.810 "driver_specific": {} 00:17:36.810 } 00:17:36.810 ] 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.810 18:31:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.810 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.068 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.068 "name": "Existed_Raid", 00:17:37.068 "uuid": "72d14dc3-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.068 "strip_size_kb": 0, 00:17:37.068 "state": "online", 00:17:37.068 "raid_level": "raid1", 00:17:37.068 "superblock": true, 00:17:37.068 "num_base_bdevs": 2, 00:17:37.068 "num_base_bdevs_discovered": 2, 00:17:37.068 "num_base_bdevs_operational": 2, 00:17:37.068 "base_bdevs_list": [ 00:17:37.068 { 00:17:37.068 "name": "BaseBdev1", 00:17:37.068 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.068 "is_configured": true, 00:17:37.068 "data_offset": 256, 00:17:37.068 "data_size": 7936 00:17:37.068 }, 00:17:37.068 { 00:17:37.068 "name": "BaseBdev2", 00:17:37.068 "uuid": "734fab2c-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.068 "is_configured": true, 00:17:37.068 "data_offset": 256, 00:17:37.068 "data_size": 7936 00:17:37.068 } 00:17:37.068 ] 00:17:37.068 }' 00:17:37.068 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.068 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:37.635 18:31:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:37.635 [2024-07-15 18:31:29.982596] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.635 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:37.635 "name": "Existed_Raid", 00:17:37.635 "aliases": [ 00:17:37.635 "72d14dc3-42d8-11ef-9ade-d5fc5159efa5" 00:17:37.635 ], 00:17:37.635 "product_name": "Raid Volume", 00:17:37.635 "block_size": 4096, 00:17:37.635 "num_blocks": 7936, 00:17:37.635 "uuid": "72d14dc3-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.635 "assigned_rate_limits": { 00:17:37.635 "rw_ios_per_sec": 0, 00:17:37.635 "rw_mbytes_per_sec": 0, 00:17:37.635 "r_mbytes_per_sec": 0, 00:17:37.635 "w_mbytes_per_sec": 0 00:17:37.635 }, 00:17:37.635 "claimed": false, 00:17:37.635 "zoned": false, 00:17:37.635 "supported_io_types": { 00:17:37.635 "read": true, 00:17:37.635 "write": true, 00:17:37.635 "unmap": false, 00:17:37.635 "flush": false, 00:17:37.635 "reset": true, 00:17:37.635 "nvme_admin": false, 00:17:37.635 "nvme_io": false, 00:17:37.635 "nvme_io_md": false, 00:17:37.635 "write_zeroes": true, 00:17:37.635 "zcopy": false, 00:17:37.635 "get_zone_info": false, 00:17:37.635 "zone_management": false, 00:17:37.635 "zone_append": false, 00:17:37.635 "compare": false, 00:17:37.635 "compare_and_write": false, 00:17:37.635 "abort": false, 00:17:37.635 "seek_hole": false, 00:17:37.635 "seek_data": false, 00:17:37.635 "copy": false, 00:17:37.635 "nvme_iov_md": false 00:17:37.635 }, 00:17:37.635 "memory_domains": [ 00:17:37.635 { 00:17:37.635 "dma_device_id": "system", 00:17:37.635 "dma_device_type": 1 00:17:37.635 }, 00:17:37.635 { 00:17:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.635 "dma_device_type": 2 00:17:37.635 }, 00:17:37.635 { 00:17:37.635 "dma_device_id": "system", 00:17:37.635 "dma_device_type": 1 00:17:37.635 }, 00:17:37.635 { 00:17:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.635 "dma_device_type": 2 00:17:37.635 } 00:17:37.635 ], 00:17:37.635 "driver_specific": { 00:17:37.635 "raid": { 00:17:37.635 "uuid": "72d14dc3-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.635 "strip_size_kb": 0, 00:17:37.635 "state": "online", 00:17:37.635 "raid_level": "raid1", 00:17:37.635 "superblock": true, 00:17:37.635 "num_base_bdevs": 2, 00:17:37.635 "num_base_bdevs_discovered": 2, 00:17:37.635 "num_base_bdevs_operational": 2, 00:17:37.635 "base_bdevs_list": [ 00:17:37.635 { 00:17:37.635 "name": "BaseBdev1", 00:17:37.635 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.635 "is_configured": true, 00:17:37.635 "data_offset": 256, 00:17:37.635 "data_size": 7936 00:17:37.635 }, 00:17:37.635 { 00:17:37.635 "name": "BaseBdev2", 00:17:37.635 "uuid": "734fab2c-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.635 "is_configured": true, 00:17:37.635 "data_offset": 256, 00:17:37.635 "data_size": 7936 00:17:37.635 } 00:17:37.635 ] 00:17:37.635 } 00:17:37.635 } 00:17:37.635 }' 00:17:37.635 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.635 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:37.635 BaseBdev2' 00:17:37.635 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.635 18:31:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.635 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.894 "name": "BaseBdev1", 00:17:37.894 "aliases": [ 00:17:37.894 "71d8df23-42d8-11ef-9ade-d5fc5159efa5" 00:17:37.894 ], 00:17:37.894 "product_name": "Malloc disk", 00:17:37.894 "block_size": 4096, 00:17:37.894 "num_blocks": 8192, 00:17:37.894 "uuid": "71d8df23-42d8-11ef-9ade-d5fc5159efa5", 00:17:37.894 "assigned_rate_limits": { 00:17:37.894 "rw_ios_per_sec": 0, 00:17:37.894 "rw_mbytes_per_sec": 0, 00:17:37.894 "r_mbytes_per_sec": 0, 00:17:37.894 "w_mbytes_per_sec": 0 00:17:37.894 }, 00:17:37.894 "claimed": true, 00:17:37.894 "claim_type": "exclusive_write", 00:17:37.894 "zoned": false, 00:17:37.894 "supported_io_types": { 00:17:37.894 "read": true, 00:17:37.894 "write": true, 00:17:37.894 "unmap": true, 00:17:37.894 "flush": true, 00:17:37.894 "reset": true, 00:17:37.894 "nvme_admin": false, 00:17:37.894 "nvme_io": false, 00:17:37.894 "nvme_io_md": false, 00:17:37.894 "write_zeroes": true, 00:17:37.894 "zcopy": true, 00:17:37.894 "get_zone_info": false, 00:17:37.894 "zone_management": false, 00:17:37.894 "zone_append": false, 00:17:37.894 "compare": false, 00:17:37.894 "compare_and_write": false, 00:17:37.894 "abort": true, 00:17:37.894 "seek_hole": false, 00:17:37.894 "seek_data": false, 00:17:37.894 "copy": true, 00:17:37.894 "nvme_iov_md": false 00:17:37.894 }, 00:17:37.894 "memory_domains": [ 00:17:37.894 { 00:17:37.894 "dma_device_id": "system", 00:17:37.894 "dma_device_type": 1 00:17:37.894 }, 00:17:37.894 { 00:17:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.894 "dma_device_type": 2 00:17:37.894 } 00:17:37.894 ], 00:17:37.894 "driver_specific": {} 00:17:37.894 }' 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.894 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.153 18:31:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.153 "name": "BaseBdev2", 00:17:38.153 "aliases": [ 00:17:38.153 "734fab2c-42d8-11ef-9ade-d5fc5159efa5" 00:17:38.153 ], 00:17:38.153 "product_name": "Malloc disk", 00:17:38.153 "block_size": 4096, 00:17:38.153 "num_blocks": 8192, 00:17:38.153 "uuid": "734fab2c-42d8-11ef-9ade-d5fc5159efa5", 00:17:38.153 "assigned_rate_limits": { 00:17:38.153 "rw_ios_per_sec": 0, 00:17:38.153 "rw_mbytes_per_sec": 0, 00:17:38.153 "r_mbytes_per_sec": 0, 00:17:38.153 "w_mbytes_per_sec": 0 00:17:38.153 }, 00:17:38.153 "claimed": true, 00:17:38.153 "claim_type": "exclusive_write", 00:17:38.153 "zoned": false, 00:17:38.153 "supported_io_types": { 00:17:38.153 "read": true, 00:17:38.153 "write": true, 00:17:38.153 "unmap": true, 00:17:38.153 "flush": true, 00:17:38.153 "reset": true, 00:17:38.153 "nvme_admin": false, 00:17:38.153 "nvme_io": false, 00:17:38.153 "nvme_io_md": false, 00:17:38.153 "write_zeroes": true, 00:17:38.153 "zcopy": true, 00:17:38.153 "get_zone_info": false, 00:17:38.153 "zone_management": false, 00:17:38.153 "zone_append": false, 00:17:38.153 "compare": false, 00:17:38.153 "compare_and_write": false, 00:17:38.153 "abort": true, 00:17:38.153 "seek_hole": false, 00:17:38.153 "seek_data": false, 00:17:38.153 "copy": true, 00:17:38.153 "nvme_iov_md": false 00:17:38.153 }, 00:17:38.153 "memory_domains": [ 00:17:38.153 { 00:17:38.153 "dma_device_id": "system", 00:17:38.153 "dma_device_type": 1 00:17:38.153 }, 00:17:38.153 { 00:17:38.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.153 "dma_device_type": 2 00:17:38.153 } 00:17:38.153 ], 00:17:38.153 "driver_specific": {} 00:17:38.153 }' 00:17:38.153 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.412 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:38.670 [2024-07-15 18:31:30.870648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.670 18:31:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.670 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.671 18:31:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.929 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.929 "name": "Existed_Raid", 00:17:38.929 "uuid": "72d14dc3-42d8-11ef-9ade-d5fc5159efa5", 00:17:38.929 "strip_size_kb": 0, 00:17:38.929 "state": "online", 00:17:38.929 "raid_level": "raid1", 00:17:38.929 "superblock": true, 00:17:38.929 "num_base_bdevs": 2, 00:17:38.929 "num_base_bdevs_discovered": 1, 00:17:38.929 "num_base_bdevs_operational": 1, 00:17:38.929 "base_bdevs_list": [ 00:17:38.929 { 00:17:38.929 "name": null, 00:17:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.929 "is_configured": false, 00:17:38.929 "data_offset": 256, 00:17:38.929 "data_size": 7936 00:17:38.929 }, 00:17:38.929 { 00:17:38.929 "name": "BaseBdev2", 00:17:38.929 "uuid": "734fab2c-42d8-11ef-9ade-d5fc5159efa5", 00:17:38.929 "is_configured": true, 00:17:38.929 "data_offset": 256, 00:17:38.929 "data_size": 7936 00:17:38.929 } 00:17:38.929 ] 00:17:38.929 }' 00:17:38.929 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.929 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.188 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:39.188 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.188 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:39.188 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.446 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:39.446 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.446 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:39.705 [2024-07-15 18:31:31.927475] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.705 [2024-07-15 18:31:31.927536] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.705 [2024-07-15 18:31:31.933347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.705 [2024-07-15 18:31:31.933368] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.705 [2024-07-15 18:31:31.933373] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3231dc34a00 name Existed_Raid, state offline 00:17:39.705 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:39.705 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.705 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.705 18:31:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65606 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65606 ']' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65606 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65606 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:39.964 killing process with pid 65606 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65606' 00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65606 
00:17:39.964 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65606 00:17:39.964 [2024-07-15 18:31:32.185419] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.964 [2024-07-15 18:31:32.185493] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.222 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:17:40.222 00:17:40.222 real 0m8.840s 00:17:40.222 user 0m15.264s 00:17:40.222 sys 0m1.657s 00:17:40.222 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.222 ************************************ 00:17:40.222 END TEST raid_state_function_test_sb_4k 00:17:40.222 ************************************ 00:17:40.222 18:31:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.222 18:31:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:40.222 18:31:32 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:40.222 18:31:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:40.222 18:31:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.222 18:31:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.222 ************************************ 00:17:40.222 START TEST raid_superblock_test_4k 00:17:40.222 ************************************ 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65880 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65880 /var/tmp/spdk-raid.sock 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@829 -- # '[' -z 65880 ']' 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.222 18:31:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.222 [2024-07-15 18:31:32.439211] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:40.222 [2024-07-15 18:31:32.439478] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:40.788 EAL: TSC is not safe to use in SMP mode 00:17:40.788 EAL: TSC is not invariant 00:17:40.788 [2024-07-15 18:31:33.048297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.788 [2024-07-15 18:31:33.165047] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:40.788 [2024-07-15 18:31:33.167643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.788 [2024-07-15 18:31:33.168606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.788 [2024-07-15 18:31:33.168623] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.354 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:17:41.354 malloc1 00:17:41.613 18:31:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:41.887 [2024-07-15 18:31:34.022327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.887 [2024-07-15 18:31:34.022391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.887 [2024-07-15 18:31:34.022405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34780 00:17:41.887 [2024-07-15 18:31:34.022413] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.887 [2024-07-15 18:31:34.023517] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.887 [2024-07-15 18:31:34.023562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.887 pt1 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.887 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:17:42.145 malloc2 00:17:42.145 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.402 [2024-07-15 18:31:34.562350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.402 [2024-07-15 18:31:34.562412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.402 [2024-07-15 18:31:34.562425] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34c80 00:17:42.402 [2024-07-15 18:31:34.562433] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.402 [2024-07-15 18:31:34.563158] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.402 [2024-07-15 18:31:34.563183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.402 pt2 00:17:42.403 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:42.403 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:42.403 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:42.403 [2024-07-15 18:31:34.798366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.403 [2024-07-15 18:31:34.798998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
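[annotation] Everything from the malloc creation above through the raid configure messages below is driven by a handful of JSON-RPC calls against the bdev_svc app. A minimal sketch of that sequence, as it could be replayed by hand from the spdk repo root (socket path, sizes, names, and UUIDs are taken verbatim from this log; the harness's xtrace and retry wrappers are omitted):

    # start the no-op bdev app with raid debug logging enabled, as the test does
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two 32 MiB malloc bdevs with a 4096-byte block size (8192 blocks each)
    $rpc bdev_malloc_create 32 4096 -b malloc1
    $rpc bdev_malloc_create 32 4096 -b malloc2
    # wrap each malloc bdev in a passthru bdev with a fixed UUID (the same
    # UUIDs that appear in the base_bdevs_list dumps later in this trace)
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble a raid1 volume over the passthru bdevs; -s reserves room for an
    # on-disk superblock, which is why the volume below reports blockcnt 7936
    # out of the 8192 base blocks (data_offset is 256 blocks)
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

The claim messages above and the configure messages below are all output of that single bdev_raid_create call. [end annotation]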
00:17:42.403 [2024-07-15 18:31:34.799075] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edfde34f00 00:17:42.403 [2024-07-15 18:31:34.799081] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.403 [2024-07-15 18:31:34.799121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edfde97e20 00:17:42.403 [2024-07-15 18:31:34.799205] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edfde34f00 00:17:42.403 [2024-07-15 18:31:34.799209] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2edfde34f00 00:17:42.403 [2024-07-15 18:31:34.799237] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.660 18:31:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.660 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.660 "name": "raid_bdev1", 00:17:42.660 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:42.660 "strip_size_kb": 0, 00:17:42.660 "state": "online", 00:17:42.660 "raid_level": "raid1", 00:17:42.660 "superblock": true, 00:17:42.660 "num_base_bdevs": 2, 00:17:42.660 "num_base_bdevs_discovered": 2, 00:17:42.660 "num_base_bdevs_operational": 2, 00:17:42.660 "base_bdevs_list": [ 00:17:42.660 { 00:17:42.660 "name": "pt1", 00:17:42.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.660 "is_configured": true, 00:17:42.660 "data_offset": 256, 00:17:42.660 "data_size": 7936 00:17:42.660 }, 00:17:42.660 { 00:17:42.660 "name": "pt2", 00:17:42.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.660 "is_configured": true, 00:17:42.660 "data_offset": 256, 00:17:42.660 "data_size": 7936 00:17:42.660 } 00:17:42.660 ] 00:17:42.660 }' 00:17:42.660 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.660 18:31:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.225 18:31:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.225 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.483 [2024-07-15 18:31:35.626459] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:43.483 "name": "raid_bdev1", 00:17:43.483 "aliases": [ 00:17:43.483 "76f618d3-42d8-11ef-9ade-d5fc5159efa5" 00:17:43.483 ], 00:17:43.483 "product_name": "Raid Volume", 00:17:43.483 "block_size": 4096, 00:17:43.483 "num_blocks": 7936, 00:17:43.483 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:43.483 "assigned_rate_limits": { 00:17:43.483 "rw_ios_per_sec": 0, 00:17:43.483 "rw_mbytes_per_sec": 0, 00:17:43.483 "r_mbytes_per_sec": 0, 00:17:43.483 "w_mbytes_per_sec": 0 00:17:43.483 }, 00:17:43.483 "claimed": false, 00:17:43.483 "zoned": false, 00:17:43.483 "supported_io_types": { 00:17:43.483 "read": true, 00:17:43.483 "write": true, 00:17:43.483 "unmap": false, 00:17:43.483 "flush": false, 00:17:43.483 "reset": true, 00:17:43.483 "nvme_admin": false, 00:17:43.483 "nvme_io": false, 00:17:43.483 "nvme_io_md": false, 00:17:43.483 "write_zeroes": true, 00:17:43.483 "zcopy": false, 00:17:43.483 "get_zone_info": false, 00:17:43.483 "zone_management": false, 00:17:43.483 "zone_append": false, 00:17:43.483 "compare": false, 00:17:43.483 "compare_and_write": false, 00:17:43.483 "abort": false, 00:17:43.483 "seek_hole": false, 00:17:43.483 "seek_data": false, 00:17:43.483 "copy": false, 00:17:43.483 "nvme_iov_md": false 00:17:43.483 }, 00:17:43.483 "memory_domains": [ 00:17:43.483 { 00:17:43.483 "dma_device_id": "system", 00:17:43.483 "dma_device_type": 1 00:17:43.483 }, 00:17:43.483 { 00:17:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.483 "dma_device_type": 2 00:17:43.483 }, 00:17:43.483 { 00:17:43.483 "dma_device_id": "system", 00:17:43.483 "dma_device_type": 1 00:17:43.483 }, 00:17:43.483 { 00:17:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.483 "dma_device_type": 2 00:17:43.483 } 00:17:43.483 ], 00:17:43.483 "driver_specific": { 00:17:43.483 "raid": { 00:17:43.483 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:43.483 "strip_size_kb": 0, 00:17:43.483 "state": "online", 00:17:43.483 "raid_level": "raid1", 00:17:43.483 "superblock": true, 00:17:43.483 "num_base_bdevs": 2, 00:17:43.483 "num_base_bdevs_discovered": 2, 00:17:43.483 "num_base_bdevs_operational": 2, 00:17:43.483 "base_bdevs_list": [ 00:17:43.483 { 00:17:43.483 "name": "pt1", 00:17:43.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.483 "is_configured": true, 00:17:43.483 "data_offset": 256, 00:17:43.483 "data_size": 7936 00:17:43.483 }, 00:17:43.483 { 00:17:43.483 "name": "pt2", 00:17:43.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.483 "is_configured": true, 
00:17:43.483 "data_offset": 256, 00:17:43.483 "data_size": 7936 00:17:43.483 } 00:17:43.483 ] 00:17:43.483 } 00:17:43.483 } 00:17:43.483 }' 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:43.483 pt2' 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:43.483 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.740 "name": "pt1", 00:17:43.740 "aliases": [ 00:17:43.740 "00000000-0000-0000-0000-000000000001" 00:17:43.740 ], 00:17:43.740 "product_name": "passthru", 00:17:43.740 "block_size": 4096, 00:17:43.740 "num_blocks": 8192, 00:17:43.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.740 "assigned_rate_limits": { 00:17:43.740 "rw_ios_per_sec": 0, 00:17:43.740 "rw_mbytes_per_sec": 0, 00:17:43.740 "r_mbytes_per_sec": 0, 00:17:43.740 "w_mbytes_per_sec": 0 00:17:43.740 }, 00:17:43.740 "claimed": true, 00:17:43.740 "claim_type": "exclusive_write", 00:17:43.740 "zoned": false, 00:17:43.740 "supported_io_types": { 00:17:43.740 "read": true, 00:17:43.740 "write": true, 00:17:43.740 "unmap": true, 00:17:43.740 "flush": true, 00:17:43.740 "reset": true, 00:17:43.740 "nvme_admin": false, 00:17:43.740 "nvme_io": false, 00:17:43.740 "nvme_io_md": false, 00:17:43.740 "write_zeroes": true, 00:17:43.740 "zcopy": true, 00:17:43.740 "get_zone_info": false, 00:17:43.740 "zone_management": false, 00:17:43.740 "zone_append": false, 00:17:43.740 "compare": false, 00:17:43.740 "compare_and_write": false, 00:17:43.740 "abort": true, 00:17:43.740 "seek_hole": false, 00:17:43.740 "seek_data": false, 00:17:43.740 "copy": true, 00:17:43.740 "nvme_iov_md": false 00:17:43.740 }, 00:17:43.740 "memory_domains": [ 00:17:43.740 { 00:17:43.740 "dma_device_id": "system", 00:17:43.740 "dma_device_type": 1 00:17:43.740 }, 00:17:43.740 { 00:17:43.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.740 "dma_device_type": 2 00:17:43.740 } 00:17:43.740 ], 00:17:43.740 "driver_specific": { 00:17:43.740 "passthru": { 00:17:43.740 "name": "pt1", 00:17:43.740 "base_bdev_name": "malloc1" 00:17:43.740 } 00:17:43.740 } 00:17:43.740 }' 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.740 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:43.741 18:31:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.998 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.998 "name": "pt2", 00:17:43.998 "aliases": [ 00:17:43.998 "00000000-0000-0000-0000-000000000002" 00:17:43.998 ], 00:17:43.998 "product_name": "passthru", 00:17:43.998 "block_size": 4096, 00:17:43.998 "num_blocks": 8192, 00:17:43.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.998 "assigned_rate_limits": { 00:17:43.998 "rw_ios_per_sec": 0, 00:17:43.998 "rw_mbytes_per_sec": 0, 00:17:43.998 "r_mbytes_per_sec": 0, 00:17:43.998 "w_mbytes_per_sec": 0 00:17:43.998 }, 00:17:43.998 "claimed": true, 00:17:43.998 "claim_type": "exclusive_write", 00:17:43.998 "zoned": false, 00:17:43.998 "supported_io_types": { 00:17:43.998 "read": true, 00:17:43.998 "write": true, 00:17:43.998 "unmap": true, 00:17:43.998 "flush": true, 00:17:43.998 "reset": true, 00:17:43.998 "nvme_admin": false, 00:17:43.998 "nvme_io": false, 00:17:43.999 "nvme_io_md": false, 00:17:43.999 "write_zeroes": true, 00:17:43.999 "zcopy": true, 00:17:43.999 "get_zone_info": false, 00:17:43.999 "zone_management": false, 00:17:43.999 "zone_append": false, 00:17:43.999 "compare": false, 00:17:43.999 "compare_and_write": false, 00:17:43.999 "abort": true, 00:17:43.999 "seek_hole": false, 00:17:43.999 "seek_data": false, 00:17:43.999 "copy": true, 00:17:43.999 "nvme_iov_md": false 00:17:43.999 }, 00:17:43.999 "memory_domains": [ 00:17:43.999 { 00:17:43.999 "dma_device_id": "system", 00:17:43.999 "dma_device_type": 1 00:17:43.999 }, 00:17:43.999 { 00:17:43.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.999 "dma_device_type": 2 00:17:43.999 } 00:17:43.999 ], 00:17:43.999 "driver_specific": { 00:17:43.999 "passthru": { 00:17:43.999 "name": "pt2", 00:17:43.999 "base_bdev_name": "malloc2" 00:17:43.999 } 00:17:43.999 } 00:17:43.999 }' 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.999 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:44.256 [2024-07-15 18:31:36.462539] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.256 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=76f618d3-42d8-11ef-9ade-d5fc5159efa5 00:17:44.256 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 76f618d3-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:44.256 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:44.512 [2024-07-15 18:31:36.742494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.512 [2024-07-15 18:31:36.742519] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.512 [2024-07-15 18:31:36.742558] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.512 [2024-07-15 18:31:36.742573] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.512 [2024-07-15 18:31:36.742578] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde34f00 name raid_bdev1, state offline 00:17:44.512 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.512 18:31:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:44.769 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:44.769 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:44.769 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.769 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:45.026 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:45.026 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:45.283 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:45.283 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:45.540 18:31:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:45.540 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:45.798 [2024-07-15 18:31:37.966608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:45.798 [2024-07-15 18:31:37.967242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:45.798 [2024-07-15 18:31:37.967269] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:45.798 [2024-07-15 18:31:37.967305] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:45.798 [2024-07-15 18:31:37.967316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.798 [2024-07-15 18:31:37.967321] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde34c80 name raid_bdev1, state configuring 00:17:45.798 request: 00:17:45.798 { 00:17:45.798 "name": "raid_bdev1", 00:17:45.798 "raid_level": "raid1", 00:17:45.798 "base_bdevs": [ 00:17:45.798 "malloc1", 00:17:45.798 "malloc2" 00:17:45.798 ], 00:17:45.798 "superblock": false, 00:17:45.798 "method": "bdev_raid_create", 00:17:45.798 "req_id": 1 00:17:45.798 } 00:17:45.798 Got JSON-RPC error response 00:17:45.798 response: 00:17:45.798 { 00:17:45.798 "code": -17, 00:17:45.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:45.798 } 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.798 18:31:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:46.055 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:46.055 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:46.055 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.312 [2024-07-15 18:31:38.474651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.312 [2024-07-15 18:31:38.474707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.312 [2024-07-15 18:31:38.474719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34780 00:17:46.312 [2024-07-15 18:31:38.474727] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.312 [2024-07-15 18:31:38.475430] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.312 [2024-07-15 18:31:38.475449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.312 [2024-07-15 18:31:38.475476] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:46.312 [2024-07-15 18:31:38.475487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.312 pt1 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.312 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.571 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.571 "name": "raid_bdev1", 00:17:46.571 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:46.571 "strip_size_kb": 0, 00:17:46.571 "state": "configuring", 00:17:46.571 "raid_level": "raid1", 00:17:46.571 "superblock": true, 00:17:46.571 "num_base_bdevs": 2, 00:17:46.571 "num_base_bdevs_discovered": 1, 00:17:46.571 "num_base_bdevs_operational": 2, 00:17:46.571 
"base_bdevs_list": [ 00:17:46.571 { 00:17:46.571 "name": "pt1", 00:17:46.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.571 "is_configured": true, 00:17:46.571 "data_offset": 256, 00:17:46.571 "data_size": 7936 00:17:46.571 }, 00:17:46.571 { 00:17:46.571 "name": null, 00:17:46.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.571 "is_configured": false, 00:17:46.571 "data_offset": 256, 00:17:46.571 "data_size": 7936 00:17:46.571 } 00:17:46.571 ] 00:17:46.571 }' 00:17:46.571 18:31:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.571 18:31:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.864 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:46.864 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:46.864 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:46.864 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.121 [2024-07-15 18:31:39.318716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.121 [2024-07-15 18:31:39.318774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.121 [2024-07-15 18:31:39.318787] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34f00 00:17:47.121 [2024-07-15 18:31:39.318795] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.122 [2024-07-15 18:31:39.318922] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.122 [2024-07-15 18:31:39.318934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.122 [2024-07-15 18:31:39.318957] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:47.122 [2024-07-15 18:31:39.318966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.122 [2024-07-15 18:31:39.318995] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edfde35180 00:17:47.122 [2024-07-15 18:31:39.319000] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.122 [2024-07-15 18:31:39.319019] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edfde97e20 00:17:47.122 [2024-07-15 18:31:39.319074] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edfde35180 00:17:47.122 [2024-07-15 18:31:39.319079] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2edfde35180 00:17:47.122 [2024-07-15 18:31:39.319102] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.122 pt2 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.122 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.381 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.381 "name": "raid_bdev1", 00:17:47.381 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:47.381 "strip_size_kb": 0, 00:17:47.381 "state": "online", 00:17:47.381 "raid_level": "raid1", 00:17:47.381 "superblock": true, 00:17:47.381 "num_base_bdevs": 2, 00:17:47.381 "num_base_bdevs_discovered": 2, 00:17:47.381 "num_base_bdevs_operational": 2, 00:17:47.381 "base_bdevs_list": [ 00:17:47.381 { 00:17:47.381 "name": "pt1", 00:17:47.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.381 "is_configured": true, 00:17:47.381 "data_offset": 256, 00:17:47.381 "data_size": 7936 00:17:47.381 }, 00:17:47.381 { 00:17:47.381 "name": "pt2", 00:17:47.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.381 "is_configured": true, 00:17:47.381 "data_offset": 256, 00:17:47.381 "data_size": 7936 00:17:47.381 } 00:17:47.381 ] 00:17:47.381 }' 00:17:47.381 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.381 18:31:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.639 18:31:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:47.897 [2024-07-15 18:31:40.182825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.897 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:47.897 "name": "raid_bdev1", 00:17:47.897 "aliases": [ 00:17:47.897 "76f618d3-42d8-11ef-9ade-d5fc5159efa5" 00:17:47.897 
], 00:17:47.897 "product_name": "Raid Volume", 00:17:47.897 "block_size": 4096, 00:17:47.897 "num_blocks": 7936, 00:17:47.897 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:47.897 "assigned_rate_limits": { 00:17:47.897 "rw_ios_per_sec": 0, 00:17:47.897 "rw_mbytes_per_sec": 0, 00:17:47.897 "r_mbytes_per_sec": 0, 00:17:47.897 "w_mbytes_per_sec": 0 00:17:47.898 }, 00:17:47.898 "claimed": false, 00:17:47.898 "zoned": false, 00:17:47.898 "supported_io_types": { 00:17:47.898 "read": true, 00:17:47.898 "write": true, 00:17:47.898 "unmap": false, 00:17:47.898 "flush": false, 00:17:47.898 "reset": true, 00:17:47.898 "nvme_admin": false, 00:17:47.898 "nvme_io": false, 00:17:47.898 "nvme_io_md": false, 00:17:47.898 "write_zeroes": true, 00:17:47.898 "zcopy": false, 00:17:47.898 "get_zone_info": false, 00:17:47.898 "zone_management": false, 00:17:47.898 "zone_append": false, 00:17:47.898 "compare": false, 00:17:47.898 "compare_and_write": false, 00:17:47.898 "abort": false, 00:17:47.898 "seek_hole": false, 00:17:47.898 "seek_data": false, 00:17:47.898 "copy": false, 00:17:47.898 "nvme_iov_md": false 00:17:47.898 }, 00:17:47.898 "memory_domains": [ 00:17:47.898 { 00:17:47.898 "dma_device_id": "system", 00:17:47.898 "dma_device_type": 1 00:17:47.898 }, 00:17:47.898 { 00:17:47.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.898 "dma_device_type": 2 00:17:47.898 }, 00:17:47.898 { 00:17:47.898 "dma_device_id": "system", 00:17:47.898 "dma_device_type": 1 00:17:47.898 }, 00:17:47.898 { 00:17:47.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.898 "dma_device_type": 2 00:17:47.898 } 00:17:47.898 ], 00:17:47.898 "driver_specific": { 00:17:47.898 "raid": { 00:17:47.898 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:47.898 "strip_size_kb": 0, 00:17:47.898 "state": "online", 00:17:47.898 "raid_level": "raid1", 00:17:47.898 "superblock": true, 00:17:47.898 "num_base_bdevs": 2, 00:17:47.898 "num_base_bdevs_discovered": 2, 00:17:47.898 "num_base_bdevs_operational": 2, 00:17:47.898 "base_bdevs_list": [ 00:17:47.898 { 00:17:47.898 "name": "pt1", 00:17:47.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.898 "is_configured": true, 00:17:47.898 "data_offset": 256, 00:17:47.898 "data_size": 7936 00:17:47.898 }, 00:17:47.898 { 00:17:47.898 "name": "pt2", 00:17:47.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.898 "is_configured": true, 00:17:47.898 "data_offset": 256, 00:17:47.898 "data_size": 7936 00:17:47.898 } 00:17:47.898 ] 00:17:47.898 } 00:17:47.898 } 00:17:47.898 }' 00:17:47.898 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.898 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:47.898 pt2' 00:17:47.898 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:47.898 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:47.898 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.156 "name": "pt1", 00:17:48.156 "aliases": [ 00:17:48.156 "00000000-0000-0000-0000-000000000001" 00:17:48.156 ], 00:17:48.156 "product_name": "passthru", 00:17:48.156 "block_size": 4096, 00:17:48.156 "num_blocks": 
8192, 00:17:48.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.156 "assigned_rate_limits": { 00:17:48.156 "rw_ios_per_sec": 0, 00:17:48.156 "rw_mbytes_per_sec": 0, 00:17:48.156 "r_mbytes_per_sec": 0, 00:17:48.156 "w_mbytes_per_sec": 0 00:17:48.156 }, 00:17:48.156 "claimed": true, 00:17:48.156 "claim_type": "exclusive_write", 00:17:48.156 "zoned": false, 00:17:48.156 "supported_io_types": { 00:17:48.156 "read": true, 00:17:48.156 "write": true, 00:17:48.156 "unmap": true, 00:17:48.156 "flush": true, 00:17:48.156 "reset": true, 00:17:48.156 "nvme_admin": false, 00:17:48.156 "nvme_io": false, 00:17:48.156 "nvme_io_md": false, 00:17:48.156 "write_zeroes": true, 00:17:48.156 "zcopy": true, 00:17:48.156 "get_zone_info": false, 00:17:48.156 "zone_management": false, 00:17:48.156 "zone_append": false, 00:17:48.156 "compare": false, 00:17:48.156 "compare_and_write": false, 00:17:48.156 "abort": true, 00:17:48.156 "seek_hole": false, 00:17:48.156 "seek_data": false, 00:17:48.156 "copy": true, 00:17:48.156 "nvme_iov_md": false 00:17:48.156 }, 00:17:48.156 "memory_domains": [ 00:17:48.156 { 00:17:48.156 "dma_device_id": "system", 00:17:48.156 "dma_device_type": 1 00:17:48.156 }, 00:17:48.156 { 00:17:48.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.156 "dma_device_type": 2 00:17:48.156 } 00:17:48.156 ], 00:17:48.156 "driver_specific": { 00:17:48.156 "passthru": { 00:17:48.156 "name": "pt1", 00:17:48.156 "base_bdev_name": "malloc1" 00:17:48.156 } 00:17:48.156 } 00:17:48.156 }' 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:48.156 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.414 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.414 "name": "pt2", 00:17:48.414 "aliases": [ 00:17:48.414 "00000000-0000-0000-0000-000000000002" 00:17:48.414 ], 00:17:48.414 "product_name": "passthru", 00:17:48.414 "block_size": 4096, 00:17:48.414 "num_blocks": 8192, 00:17:48.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.414 "assigned_rate_limits": 
{ 00:17:48.414 "rw_ios_per_sec": 0, 00:17:48.414 "rw_mbytes_per_sec": 0, 00:17:48.414 "r_mbytes_per_sec": 0, 00:17:48.414 "w_mbytes_per_sec": 0 00:17:48.414 }, 00:17:48.414 "claimed": true, 00:17:48.414 "claim_type": "exclusive_write", 00:17:48.414 "zoned": false, 00:17:48.414 "supported_io_types": { 00:17:48.414 "read": true, 00:17:48.414 "write": true, 00:17:48.414 "unmap": true, 00:17:48.414 "flush": true, 00:17:48.414 "reset": true, 00:17:48.414 "nvme_admin": false, 00:17:48.414 "nvme_io": false, 00:17:48.414 "nvme_io_md": false, 00:17:48.414 "write_zeroes": true, 00:17:48.414 "zcopy": true, 00:17:48.414 "get_zone_info": false, 00:17:48.414 "zone_management": false, 00:17:48.414 "zone_append": false, 00:17:48.414 "compare": false, 00:17:48.414 "compare_and_write": false, 00:17:48.414 "abort": true, 00:17:48.414 "seek_hole": false, 00:17:48.414 "seek_data": false, 00:17:48.414 "copy": true, 00:17:48.414 "nvme_iov_md": false 00:17:48.414 }, 00:17:48.414 "memory_domains": [ 00:17:48.414 { 00:17:48.414 "dma_device_id": "system", 00:17:48.414 "dma_device_type": 1 00:17:48.414 }, 00:17:48.414 { 00:17:48.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.414 "dma_device_type": 2 00:17:48.414 } 00:17:48.414 ], 00:17:48.414 "driver_specific": { 00:17:48.414 "passthru": { 00:17:48.414 "name": "pt2", 00:17:48.414 "base_bdev_name": "malloc2" 00:17:48.414 } 00:17:48.414 } 00:17:48.414 }' 00:17:48.414 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.414 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.414 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:48.414 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.674 18:31:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:48.932 [2024-07-15 18:31:41.082903] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.932 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 76f618d3-42d8-11ef-9ade-d5fc5159efa5 '!=' 76f618d3-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:48.932 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:48.932 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:48.932 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:48.932 18:31:41 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:48.932 [2024-07-15 18:31:41.322879] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.191 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.450 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.450 "name": "raid_bdev1", 00:17:49.450 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:49.450 "strip_size_kb": 0, 00:17:49.450 "state": "online", 00:17:49.450 "raid_level": "raid1", 00:17:49.450 "superblock": true, 00:17:49.450 "num_base_bdevs": 2, 00:17:49.450 "num_base_bdevs_discovered": 1, 00:17:49.450 "num_base_bdevs_operational": 1, 00:17:49.450 "base_bdevs_list": [ 00:17:49.450 { 00:17:49.450 "name": null, 00:17:49.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.450 "is_configured": false, 00:17:49.450 "data_offset": 256, 00:17:49.450 "data_size": 7936 00:17:49.450 }, 00:17:49.450 { 00:17:49.450 "name": "pt2", 00:17:49.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.450 "is_configured": true, 00:17:49.450 "data_offset": 256, 00:17:49.450 "data_size": 7936 00:17:49.450 } 00:17:49.450 ] 00:17:49.450 }' 00:17:49.450 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.450 18:31:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.709 18:31:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:49.968 [2024-07-15 18:31:42.142969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.968 [2024-07-15 18:31:42.142992] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.968 [2024-07-15 18:31:42.143031] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.968 [2024-07-15 18:31:42.143043] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:49.968 [2024-07-15 18:31:42.143047] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde35180 name raid_bdev1, state offline 00:17:49.968 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.968 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:50.226 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:50.226 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:50.226 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:50.226 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:50.226 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:17:50.485 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.744 [2024-07-15 18:31:42.911036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.744 [2024-07-15 18:31:42.911107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.744 [2024-07-15 18:31:42.911120] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34f00 00:17:50.744 [2024-07-15 18:31:42.911128] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.744 [2024-07-15 18:31:42.911855] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.744 [2024-07-15 18:31:42.911880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.744 [2024-07-15 18:31:42.911906] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.744 [2024-07-15 18:31:42.911917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.744 [2024-07-15 18:31:42.911943] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edfde35180 00:17:50.744 [2024-07-15 18:31:42.911947] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.744 [2024-07-15 18:31:42.911968] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edfde97e20 00:17:50.744 [2024-07-15 18:31:42.912018] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edfde35180 00:17:50.744 [2024-07-15 18:31:42.912022] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2edfde35180 00:17:50.744 [2024-07-15 18:31:42.912051] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.744 pt2 00:17:50.744 
18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.744 18:31:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.003 18:31:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.003 "name": "raid_bdev1", 00:17:51.003 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:51.003 "strip_size_kb": 0, 00:17:51.003 "state": "online", 00:17:51.003 "raid_level": "raid1", 00:17:51.003 "superblock": true, 00:17:51.003 "num_base_bdevs": 2, 00:17:51.003 "num_base_bdevs_discovered": 1, 00:17:51.003 "num_base_bdevs_operational": 1, 00:17:51.003 "base_bdevs_list": [ 00:17:51.003 { 00:17:51.003 "name": null, 00:17:51.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.003 "is_configured": false, 00:17:51.003 "data_offset": 256, 00:17:51.003 "data_size": 7936 00:17:51.003 }, 00:17:51.003 { 00:17:51.003 "name": "pt2", 00:17:51.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.003 "is_configured": true, 00:17:51.003 "data_offset": 256, 00:17:51.003 "data_size": 7936 00:17:51.003 } 00:17:51.003 ] 00:17:51.003 }' 00:17:51.003 18:31:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.003 18:31:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 18:31:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:51.520 [2024-07-15 18:31:43.739093] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.520 [2024-07-15 18:31:43.739115] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.520 [2024-07-15 18:31:43.739138] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.520 [2024-07-15 18:31:43.739151] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.520 [2024-07-15 18:31:43.739155] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde35180 name raid_bdev1, state offline 00:17:51.520 18:31:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.520 18:31:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:51.779 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:51.779 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:51.779 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:51.779 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.038 [2024-07-15 18:31:44.279146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.038 [2024-07-15 18:31:44.279200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.038 [2024-07-15 18:31:44.279213] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2edfde34c80 00:17:52.038 [2024-07-15 18:31:44.279221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.038 [2024-07-15 18:31:44.279923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.038 [2024-07-15 18:31:44.279948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.038 [2024-07-15 18:31:44.279974] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.038 [2024-07-15 18:31:44.279987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.038 [2024-07-15 18:31:44.280018] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.038 [2024-07-15 18:31:44.280022] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.038 [2024-07-15 18:31:44.280027] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde34780 name raid_bdev1, state configuring 00:17:52.038 [2024-07-15 18:31:44.280034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.038 [2024-07-15 18:31:44.280049] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edfde34780 00:17:52.038 [2024-07-15 18:31:44.280052] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.038 [2024-07-15 18:31:44.280073] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edfde97e20 00:17:52.038 [2024-07-15 18:31:44.280122] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edfde34780 00:17:52.038 [2024-07-15 18:31:44.280126] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2edfde34780 00:17:52.038 [2024-07-15 18:31:44.280147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.038 pt1 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:52.038 18:31:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.038 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.297 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.297 "name": "raid_bdev1", 00:17:52.297 "uuid": "76f618d3-42d8-11ef-9ade-d5fc5159efa5", 00:17:52.297 "strip_size_kb": 0, 00:17:52.297 "state": "online", 00:17:52.297 "raid_level": "raid1", 00:17:52.297 "superblock": true, 00:17:52.297 "num_base_bdevs": 2, 00:17:52.297 "num_base_bdevs_discovered": 1, 00:17:52.297 "num_base_bdevs_operational": 1, 00:17:52.297 "base_bdevs_list": [ 00:17:52.297 { 00:17:52.297 "name": null, 00:17:52.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.297 "is_configured": false, 00:17:52.297 "data_offset": 256, 00:17:52.297 "data_size": 7936 00:17:52.297 }, 00:17:52.297 { 00:17:52.297 "name": "pt2", 00:17:52.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.297 "is_configured": true, 00:17:52.297 "data_offset": 256, 00:17:52.297 "data_size": 7936 00:17:52.297 } 00:17:52.297 ] 00:17:52.297 }' 00:17:52.297 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.297 18:31:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.555 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:52.555 18:31:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.814 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:52.814 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:52.814 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:53.073 [2024-07-15 18:31:45.343260] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 76f618d3-42d8-11ef-9ade-d5fc5159efa5 '!=' 76f618d3-42d8-11ef-9ade-d5fc5159efa5 ']' 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65880 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65880 ']' 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65880 
00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65880 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:53.073 killing process with pid 65880 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65880' 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65880 00:17:53.073 [2024-07-15 18:31:45.371943] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.073 [2024-07-15 18:31:45.371967] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.073 [2024-07-15 18:31:45.371979] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.073 [2024-07-15 18:31:45.371983] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edfde34780 name raid_bdev1, state offline 00:17:53.073 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65880 00:17:53.073 [2024-07-15 18:31:45.385574] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.332 18:31:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:17:53.332 00:17:53.332 real 0m13.173s 00:17:53.332 user 0m23.475s 00:17:53.332 sys 0m2.052s 00:17:53.332 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:53.332 ************************************ 00:17:53.332 18:31:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.332 END TEST raid_superblock_test_4k 00:17:53.332 ************************************ 00:17:53.332 18:31:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:53.332 18:31:45 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:17:53.332 18:31:45 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:17:53.332 18:31:45 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:53.332 18:31:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:53.332 18:31:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.332 18:31:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.332 ************************************ 00:17:53.332 START TEST raid_state_function_test_sb_md_separate 00:17:53.332 ************************************ 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local 
superblock=true 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66267 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:53.332 Process raid pid: 66267 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66267' 00:17:53.332 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66267 /var/tmp/spdk-raid.sock 00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66267 ']' 00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.333 18:31:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.333 [2024-07-15 18:31:45.657307] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:53.333 [2024-07-15 18:31:45.657548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:53.901 EAL: TSC is not safe to use in SMP mode 00:17:53.901 EAL: TSC is not invariant 00:17:53.901 [2024-07-15 18:31:46.254792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.194 [2024-07-15 18:31:46.362754] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:54.194 [2024-07-15 18:31:46.364921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.194 [2024-07-15 18:31:46.365717] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.194 [2024-07-15 18:31:46.365732] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.453 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.453 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:17:54.453 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:54.712 [2024-07-15 18:31:46.950274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.712 [2024-07-15 18:31:46.950349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.712 [2024-07-15 18:31:46.950355] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.712 [2024-07-15 18:31:46.950364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.712 18:31:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.971 18:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.971 "name": "Existed_Raid", 00:17:54.971 "uuid": "7e3454c2-42d8-11ef-9ade-d5fc5159efa5", 00:17:54.971 "strip_size_kb": 0, 00:17:54.971 "state": "configuring", 00:17:54.971 "raid_level": "raid1", 00:17:54.971 "superblock": true, 00:17:54.971 "num_base_bdevs": 2, 00:17:54.971 "num_base_bdevs_discovered": 0, 00:17:54.971 "num_base_bdevs_operational": 2, 00:17:54.971 "base_bdevs_list": [ 00:17:54.971 { 00:17:54.971 "name": "BaseBdev1", 00:17:54.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.971 "is_configured": false, 00:17:54.971 "data_offset": 0, 00:17:54.971 "data_size": 0 00:17:54.971 }, 00:17:54.971 { 00:17:54.971 "name": "BaseBdev2", 00:17:54.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.971 "is_configured": false, 00:17:54.971 "data_offset": 0, 00:17:54.971 "data_size": 0 00:17:54.971 } 00:17:54.971 ] 00:17:54.971 }' 00:17:54.971 18:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.971 18:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 18:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.488 [2024-07-15 18:31:47.830321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.488 [2024-07-15 18:31:47.830345] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3fce834500 name Existed_Raid, state configuring 00:17:55.488 18:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:55.746 [2024-07-15 18:31:48.070348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.747 [2024-07-15 18:31:48.070399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.747 [2024-07-15 18:31:48.070404] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.747 [2024-07-15 18:31:48.070413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.747 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:56.005 [2024-07-15 18:31:48.299406] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.005 BaseBdev1 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:56.005 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.264 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.523 [ 00:17:56.523 { 00:17:56.523 "name": "BaseBdev1", 00:17:56.523 "aliases": [ 00:17:56.523 "7f020926-42d8-11ef-9ade-d5fc5159efa5" 00:17:56.523 ], 00:17:56.523 "product_name": "Malloc disk", 00:17:56.523 "block_size": 4096, 00:17:56.523 "num_blocks": 8192, 00:17:56.523 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:56.523 "md_size": 32, 00:17:56.523 "md_interleave": false, 00:17:56.523 "dif_type": 0, 00:17:56.523 "assigned_rate_limits": { 00:17:56.523 "rw_ios_per_sec": 0, 00:17:56.523 "rw_mbytes_per_sec": 0, 00:17:56.523 "r_mbytes_per_sec": 0, 00:17:56.523 "w_mbytes_per_sec": 0 00:17:56.523 }, 00:17:56.523 "claimed": true, 00:17:56.523 "claim_type": "exclusive_write", 00:17:56.523 "zoned": false, 00:17:56.523 "supported_io_types": { 00:17:56.523 "read": true, 00:17:56.523 "write": true, 00:17:56.523 "unmap": true, 00:17:56.523 "flush": true, 00:17:56.523 "reset": true, 00:17:56.523 "nvme_admin": false, 00:17:56.523 "nvme_io": false, 00:17:56.523 "nvme_io_md": false, 00:17:56.523 "write_zeroes": true, 00:17:56.523 "zcopy": true, 00:17:56.523 "get_zone_info": false, 00:17:56.523 "zone_management": false, 00:17:56.523 "zone_append": false, 00:17:56.523 "compare": false, 00:17:56.523 "compare_and_write": false, 00:17:56.523 "abort": true, 00:17:56.523 "seek_hole": false, 00:17:56.523 "seek_data": false, 00:17:56.523 "copy": true, 00:17:56.523 "nvme_iov_md": false 00:17:56.523 }, 00:17:56.523 "memory_domains": [ 00:17:56.523 { 00:17:56.523 "dma_device_id": "system", 00:17:56.523 "dma_device_type": 1 00:17:56.523 }, 00:17:56.523 { 00:17:56.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.523 "dma_device_type": 2 00:17:56.523 } 00:17:56.523 ], 00:17:56.523 "driver_specific": {} 00:17:56.523 } 00:17:56.523 ] 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.523 18:31:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.523 18:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.782 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.782 "name": "Existed_Raid", 00:17:56.782 "uuid": "7edf3dbc-42d8-11ef-9ade-d5fc5159efa5", 00:17:56.782 "strip_size_kb": 0, 00:17:56.782 "state": "configuring", 00:17:56.782 "raid_level": "raid1", 00:17:56.782 "superblock": true, 00:17:56.782 "num_base_bdevs": 2, 00:17:56.782 "num_base_bdevs_discovered": 1, 00:17:56.782 "num_base_bdevs_operational": 2, 00:17:56.782 "base_bdevs_list": [ 00:17:56.782 { 00:17:56.782 "name": "BaseBdev1", 00:17:56.782 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:56.782 "is_configured": true, 00:17:56.782 "data_offset": 256, 00:17:56.782 "data_size": 7936 00:17:56.782 }, 00:17:56.782 { 00:17:56.782 "name": "BaseBdev2", 00:17:56.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.782 "is_configured": false, 00:17:56.782 "data_offset": 0, 00:17:56.782 "data_size": 0 00:17:56.782 } 00:17:56.782 ] 00:17:56.782 }' 00:17:56.782 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.782 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.041 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:57.299 [2024-07-15 18:31:49.558465] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.299 [2024-07-15 18:31:49.558515] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3fce834500 name Existed_Raid, state configuring 00:17:57.299 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:57.558 [2024-07-15 18:31:49.798497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.558 [2024-07-15 18:31:49.799347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.558 [2024-07-15 18:31:49.799384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:57.558 18:31:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.558 18:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.817 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.817 "name": "Existed_Raid", 00:17:57.817 "uuid": "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5", 00:17:57.817 "strip_size_kb": 0, 00:17:57.817 "state": "configuring", 00:17:57.817 "raid_level": "raid1", 00:17:57.817 "superblock": true, 00:17:57.817 "num_base_bdevs": 2, 00:17:57.817 "num_base_bdevs_discovered": 1, 00:17:57.817 "num_base_bdevs_operational": 2, 00:17:57.817 "base_bdevs_list": [ 00:17:57.817 { 00:17:57.817 "name": "BaseBdev1", 00:17:57.817 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:57.817 "is_configured": true, 00:17:57.817 "data_offset": 256, 00:17:57.817 "data_size": 7936 00:17:57.817 }, 00:17:57.817 { 00:17:57.817 "name": "BaseBdev2", 00:17:57.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.817 "is_configured": false, 00:17:57.817 "data_offset": 0, 00:17:57.817 "data_size": 0 00:17:57.817 } 00:17:57.817 ] 00:17:57.817 }' 00:17:57.817 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.817 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.076 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:58.336 [2024-07-15 18:31:50.622653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.336 [2024-07-15 18:31:50.622724] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3fce834a00 00:17:58.336 [2024-07-15 18:31:50.622730] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.336 [2024-07-15 18:31:50.622750] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0xd3fce897e20 00:17:58.336 [2024-07-15 18:31:50.622782] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd3fce834a00 00:17:58.336 [2024-07-15 18:31:50.622786] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd3fce834a00 00:17:58.336 [2024-07-15 18:31:50.622801] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.336 BaseBdev2 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:58.336 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.595 18:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.855 [ 00:17:58.855 { 00:17:58.855 "name": "BaseBdev2", 00:17:58.855 "aliases": [ 00:17:58.855 "8064adcd-42d8-11ef-9ade-d5fc5159efa5" 00:17:58.855 ], 00:17:58.855 "product_name": "Malloc disk", 00:17:58.855 "block_size": 4096, 00:17:58.855 "num_blocks": 8192, 00:17:58.855 "uuid": "8064adcd-42d8-11ef-9ade-d5fc5159efa5", 00:17:58.855 "md_size": 32, 00:17:58.855 "md_interleave": false, 00:17:58.855 "dif_type": 0, 00:17:58.855 "assigned_rate_limits": { 00:17:58.855 "rw_ios_per_sec": 0, 00:17:58.855 "rw_mbytes_per_sec": 0, 00:17:58.855 "r_mbytes_per_sec": 0, 00:17:58.855 "w_mbytes_per_sec": 0 00:17:58.855 }, 00:17:58.855 "claimed": true, 00:17:58.855 "claim_type": "exclusive_write", 00:17:58.855 "zoned": false, 00:17:58.855 "supported_io_types": { 00:17:58.855 "read": true, 00:17:58.855 "write": true, 00:17:58.855 "unmap": true, 00:17:58.855 "flush": true, 00:17:58.855 "reset": true, 00:17:58.855 "nvme_admin": false, 00:17:58.855 "nvme_io": false, 00:17:58.855 "nvme_io_md": false, 00:17:58.855 "write_zeroes": true, 00:17:58.855 "zcopy": true, 00:17:58.855 "get_zone_info": false, 00:17:58.855 "zone_management": false, 00:17:58.855 "zone_append": false, 00:17:58.855 "compare": false, 00:17:58.855 "compare_and_write": false, 00:17:58.855 "abort": true, 00:17:58.855 "seek_hole": false, 00:17:58.855 "seek_data": false, 00:17:58.855 "copy": true, 00:17:58.855 "nvme_iov_md": false 00:17:58.855 }, 00:17:58.855 "memory_domains": [ 00:17:58.855 { 00:17:58.855 "dma_device_id": "system", 00:17:58.855 "dma_device_type": 1 00:17:58.855 }, 00:17:58.855 { 00:17:58.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.855 "dma_device_type": 2 00:17:58.855 } 00:17:58.855 ], 00:17:58.855 "driver_specific": {} 00:17:58.855 } 00:17:58.855 ] 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:58.855 18:31:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.855 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.115 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.115 "name": "Existed_Raid", 00:17:59.115 "uuid": "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.115 "strip_size_kb": 0, 00:17:59.115 "state": "online", 00:17:59.115 "raid_level": "raid1", 00:17:59.115 "superblock": true, 00:17:59.115 "num_base_bdevs": 2, 00:17:59.115 "num_base_bdevs_discovered": 2, 00:17:59.115 "num_base_bdevs_operational": 2, 00:17:59.115 "base_bdevs_list": [ 00:17:59.115 { 00:17:59.115 "name": "BaseBdev1", 00:17:59.115 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.115 "is_configured": true, 00:17:59.115 "data_offset": 256, 00:17:59.115 "data_size": 7936 00:17:59.115 }, 00:17:59.115 { 00:17:59.115 "name": "BaseBdev2", 00:17:59.115 "uuid": "8064adcd-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.115 "is_configured": true, 00:17:59.115 "data_offset": 256, 00:17:59.115 "data_size": 7936 00:17:59.115 } 00:17:59.115 ] 00:17:59.115 }' 00:17:59.115 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.115 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.683 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.683 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:59.683 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:59.683 18:31:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:59.683 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:59.683 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:59.684 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:59.684 18:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:59.684 [2024-07-15 18:31:52.042762] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:59.684 "name": "Existed_Raid", 00:17:59.684 "aliases": [ 00:17:59.684 "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5" 00:17:59.684 ], 00:17:59.684 "product_name": "Raid Volume", 00:17:59.684 "block_size": 4096, 00:17:59.684 "num_blocks": 7936, 00:17:59.684 "uuid": "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.684 "md_size": 32, 00:17:59.684 "md_interleave": false, 00:17:59.684 "dif_type": 0, 00:17:59.684 "assigned_rate_limits": { 00:17:59.684 "rw_ios_per_sec": 0, 00:17:59.684 "rw_mbytes_per_sec": 0, 00:17:59.684 "r_mbytes_per_sec": 0, 00:17:59.684 "w_mbytes_per_sec": 0 00:17:59.684 }, 00:17:59.684 "claimed": false, 00:17:59.684 "zoned": false, 00:17:59.684 "supported_io_types": { 00:17:59.684 "read": true, 00:17:59.684 "write": true, 00:17:59.684 "unmap": false, 00:17:59.684 "flush": false, 00:17:59.684 "reset": true, 00:17:59.684 "nvme_admin": false, 00:17:59.684 "nvme_io": false, 00:17:59.684 "nvme_io_md": false, 00:17:59.684 "write_zeroes": true, 00:17:59.684 "zcopy": false, 00:17:59.684 "get_zone_info": false, 00:17:59.684 "zone_management": false, 00:17:59.684 "zone_append": false, 00:17:59.684 "compare": false, 00:17:59.684 "compare_and_write": false, 00:17:59.684 "abort": false, 00:17:59.684 "seek_hole": false, 00:17:59.684 "seek_data": false, 00:17:59.684 "copy": false, 00:17:59.684 "nvme_iov_md": false 00:17:59.684 }, 00:17:59.684 "memory_domains": [ 00:17:59.684 { 00:17:59.684 "dma_device_id": "system", 00:17:59.684 "dma_device_type": 1 00:17:59.684 }, 00:17:59.684 { 00:17:59.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.684 "dma_device_type": 2 00:17:59.684 }, 00:17:59.684 { 00:17:59.684 "dma_device_id": "system", 00:17:59.684 "dma_device_type": 1 00:17:59.684 }, 00:17:59.684 { 00:17:59.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.684 "dma_device_type": 2 00:17:59.684 } 00:17:59.684 ], 00:17:59.684 "driver_specific": { 00:17:59.684 "raid": { 00:17:59.684 "uuid": "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.684 "strip_size_kb": 0, 00:17:59.684 "state": "online", 00:17:59.684 "raid_level": "raid1", 00:17:59.684 "superblock": true, 00:17:59.684 "num_base_bdevs": 2, 00:17:59.684 "num_base_bdevs_discovered": 2, 00:17:59.684 "num_base_bdevs_operational": 2, 00:17:59.684 "base_bdevs_list": [ 00:17:59.684 { 00:17:59.684 "name": "BaseBdev1", 00:17:59.684 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.684 "is_configured": true, 00:17:59.684 "data_offset": 256, 00:17:59.684 "data_size": 7936 00:17:59.684 }, 00:17:59.684 { 00:17:59.684 "name": "BaseBdev2", 00:17:59.684 "uuid": "8064adcd-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.684 "is_configured": true, 00:17:59.684 "data_offset": 
256, 00:17:59.684 "data_size": 7936 00:17:59.684 } 00:17:59.684 ] 00:17:59.684 } 00:17:59.684 } 00:17:59.684 }' 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:59.684 BaseBdev2' 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:59.684 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.943 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.943 "name": "BaseBdev1", 00:17:59.943 "aliases": [ 00:17:59.943 "7f020926-42d8-11ef-9ade-d5fc5159efa5" 00:17:59.943 ], 00:17:59.943 "product_name": "Malloc disk", 00:17:59.943 "block_size": 4096, 00:17:59.943 "num_blocks": 8192, 00:17:59.943 "uuid": "7f020926-42d8-11ef-9ade-d5fc5159efa5", 00:17:59.943 "md_size": 32, 00:17:59.943 "md_interleave": false, 00:17:59.943 "dif_type": 0, 00:17:59.943 "assigned_rate_limits": { 00:17:59.943 "rw_ios_per_sec": 0, 00:17:59.943 "rw_mbytes_per_sec": 0, 00:17:59.943 "r_mbytes_per_sec": 0, 00:17:59.943 "w_mbytes_per_sec": 0 00:17:59.943 }, 00:17:59.943 "claimed": true, 00:17:59.943 "claim_type": "exclusive_write", 00:17:59.943 "zoned": false, 00:17:59.943 "supported_io_types": { 00:17:59.943 "read": true, 00:17:59.943 "write": true, 00:17:59.943 "unmap": true, 00:17:59.943 "flush": true, 00:17:59.943 "reset": true, 00:17:59.943 "nvme_admin": false, 00:17:59.943 "nvme_io": false, 00:17:59.943 "nvme_io_md": false, 00:17:59.943 "write_zeroes": true, 00:17:59.943 "zcopy": true, 00:17:59.943 "get_zone_info": false, 00:17:59.943 "zone_management": false, 00:17:59.943 "zone_append": false, 00:17:59.943 "compare": false, 00:17:59.943 "compare_and_write": false, 00:17:59.943 "abort": true, 00:17:59.943 "seek_hole": false, 00:17:59.943 "seek_data": false, 00:17:59.943 "copy": true, 00:17:59.943 "nvme_iov_md": false 00:17:59.943 }, 00:17:59.943 "memory_domains": [ 00:17:59.943 { 00:17:59.943 "dma_device_id": "system", 00:17:59.943 "dma_device_type": 1 00:17:59.943 }, 00:17:59.943 { 00:17:59.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.943 "dma_device_type": 2 00:17:59.943 } 00:17:59.943 ], 00:17:59.943 "driver_specific": {} 00:17:59.943 }' 00:17:59.943 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.943 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.943 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:59.943 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:00.202 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.461 "name": "BaseBdev2", 00:18:00.461 "aliases": [ 00:18:00.461 "8064adcd-42d8-11ef-9ade-d5fc5159efa5" 00:18:00.461 ], 00:18:00.461 "product_name": "Malloc disk", 00:18:00.461 "block_size": 4096, 00:18:00.461 "num_blocks": 8192, 00:18:00.461 "uuid": "8064adcd-42d8-11ef-9ade-d5fc5159efa5", 00:18:00.461 "md_size": 32, 00:18:00.461 "md_interleave": false, 00:18:00.461 "dif_type": 0, 00:18:00.461 "assigned_rate_limits": { 00:18:00.461 "rw_ios_per_sec": 0, 00:18:00.461 "rw_mbytes_per_sec": 0, 00:18:00.461 "r_mbytes_per_sec": 0, 00:18:00.461 "w_mbytes_per_sec": 0 00:18:00.461 }, 00:18:00.461 "claimed": true, 00:18:00.461 "claim_type": "exclusive_write", 00:18:00.461 "zoned": false, 00:18:00.461 "supported_io_types": { 00:18:00.461 "read": true, 00:18:00.461 "write": true, 00:18:00.461 "unmap": true, 00:18:00.461 "flush": true, 00:18:00.461 "reset": true, 00:18:00.461 "nvme_admin": false, 00:18:00.461 "nvme_io": false, 00:18:00.461 "nvme_io_md": false, 00:18:00.461 "write_zeroes": true, 00:18:00.461 "zcopy": true, 00:18:00.461 "get_zone_info": false, 00:18:00.461 "zone_management": false, 00:18:00.461 "zone_append": false, 00:18:00.461 "compare": false, 00:18:00.461 "compare_and_write": false, 00:18:00.461 "abort": true, 00:18:00.461 "seek_hole": false, 00:18:00.461 "seek_data": false, 00:18:00.461 "copy": true, 00:18:00.461 "nvme_iov_md": false 00:18:00.461 }, 00:18:00.461 "memory_domains": [ 00:18:00.461 { 00:18:00.461 "dma_device_id": "system", 00:18:00.461 "dma_device_type": 1 00:18:00.461 }, 00:18:00.461 { 00:18:00.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.461 "dma_device_type": 2 00:18:00.461 } 00:18:00.461 ], 00:18:00.461 "driver_specific": {} 00:18:00.461 }' 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:00.461 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.720 [2024-07-15 18:31:52.938821] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.720 18:31:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.979 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.979 "name": "Existed_Raid", 00:18:00.979 "uuid": "7fe6ef68-42d8-11ef-9ade-d5fc5159efa5", 00:18:00.979 "strip_size_kb": 0, 00:18:00.979 "state": "online", 00:18:00.979 
"raid_level": "raid1", 00:18:00.979 "superblock": true, 00:18:00.979 "num_base_bdevs": 2, 00:18:00.979 "num_base_bdevs_discovered": 1, 00:18:00.979 "num_base_bdevs_operational": 1, 00:18:00.979 "base_bdevs_list": [ 00:18:00.979 { 00:18:00.979 "name": null, 00:18:00.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.979 "is_configured": false, 00:18:00.979 "data_offset": 256, 00:18:00.979 "data_size": 7936 00:18:00.979 }, 00:18:00.979 { 00:18:00.979 "name": "BaseBdev2", 00:18:00.979 "uuid": "8064adcd-42d8-11ef-9ade-d5fc5159efa5", 00:18:00.979 "is_configured": true, 00:18:00.979 "data_offset": 256, 00:18:00.979 "data_size": 7936 00:18:00.979 } 00:18:00.979 ] 00:18:00.979 }' 00:18:00.979 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.979 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.238 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:01.238 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.238 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.238 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:01.496 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:01.496 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.496 18:31:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.755 [2024-07-15 18:31:54.100857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.755 [2024-07-15 18:31:54.100929] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.755 [2024-07-15 18:31:54.106889] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.755 [2024-07-15 18:31:54.106911] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.755 [2024-07-15 18:31:54.106916] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3fce834a00 name Existed_Raid, state offline 00:18:01.755 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:01.755 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.755 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:01.755 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:02.014 
18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66267 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66267 ']' 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 66267 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66267 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:02.014 killing process with pid 66267 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66267' 00:18:02.014 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 66267 00:18:02.334 [2024-07-15 18:31:54.414632] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.334 [2024-07-15 18:31:54.414683] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.334 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 66267 00:18:02.334 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:18:02.334 00:18:02.334 real 0m8.968s 00:18:02.334 user 0m15.462s 00:18:02.335 sys 0m1.685s 00:18:02.335 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.335 ************************************ 00:18:02.335 END TEST raid_state_function_test_sb_md_separate 00:18:02.335 ************************************ 00:18:02.335 18:31:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.335 18:31:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:02.335 18:31:54 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:02.335 18:31:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:02.335 18:31:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.335 18:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.335 ************************************ 00:18:02.335 START TEST raid_superblock_test_md_separate 00:18:02.335 ************************************ 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66541 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66541 /var/tmp/spdk-raid.sock 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66541 ']' 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.335 18:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.335 [2024-07-15 18:31:54.672854] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:18:02.335 [2024-07-15 18:31:54.673109] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:02.898 EAL: TSC is not safe to use in SMP mode 00:18:02.898 EAL: TSC is not invariant 00:18:02.898 [2024-07-15 18:31:55.268360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.155 [2024-07-15 18:31:55.378284] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:03.155 [2024-07-15 18:31:55.380380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.155 [2024-07-15 18:31:55.381160] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.155 [2024-07-15 18:31:55.381176] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.413 18:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:03.670 malloc1 00:18:03.670 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.926 [2024-07-15 18:31:56.293412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.926 [2024-07-15 18:31:56.293487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.926 [2024-07-15 18:31:56.293500] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434780 00:18:03.926 [2024-07-15 18:31:56.293517] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.926 [2024-07-15 18:31:56.294364] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.926 [2024-07-15 18:31:56.294391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.926 pt1 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.926 18:31:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.926 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:04.212 malloc2 00:18:04.212 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.469 [2024-07-15 18:31:56.793444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.469 [2024-07-15 18:31:56.793517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.469 [2024-07-15 18:31:56.793571] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434c80 00:18:04.469 [2024-07-15 18:31:56.793579] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.469 [2024-07-15 18:31:56.794220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.469 [2024-07-15 18:31:56.794249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.469 pt2 00:18:04.469 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:04.469 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:04.469 18:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:04.726 [2024-07-15 18:31:57.041478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.726 [2024-07-15 18:31:57.042094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.726 [2024-07-15 18:31:57.042161] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213227434f00 00:18:04.726 [2024-07-15 18:31:57.042167] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.726 [2024-07-15 18:31:57.042204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213227497e20 00:18:04.726 [2024-07-15 18:31:57.042236] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213227434f00 00:18:04.726 [2024-07-15 18:31:57.042240] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213227434f00 00:18:04.726 [2024-07-15 18:31:57.042257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.726 
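Condensed, the assembly the trace performs is: two 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate metadata, each wrapped in a passthru bdev with a fixed UUID, then combined into a raid1 array with an on-disk superblock. A sketch of the same RPC sequence (the rpc.py invocation shortened to a variable for readability):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2; do
        # 32 MiB backing store, 4096-byte blocks, 32-byte separate metadata (-m)
        $rpc bdev_malloc_create 32 4096 -m 32 -b malloc$i
        # fixed UUIDs keep the superblock contents reproducible across runs
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done

    # -s writes a superblock onto each member; note in the dumps that follow
    # how the 8192-block members yield a 7936-block raid1 volume (256 blocks
    # at data_offset are reserved for the superblock)
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s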
18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.726 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.984 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.984 "name": "raid_bdev1", 00:18:04.984 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:04.984 "strip_size_kb": 0, 00:18:04.984 "state": "online", 00:18:04.984 "raid_level": "raid1", 00:18:04.984 "superblock": true, 00:18:04.984 "num_base_bdevs": 2, 00:18:04.984 "num_base_bdevs_discovered": 2, 00:18:04.984 "num_base_bdevs_operational": 2, 00:18:04.984 "base_bdevs_list": [ 00:18:04.984 { 00:18:04.984 "name": "pt1", 00:18:04.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.984 "is_configured": true, 00:18:04.984 "data_offset": 256, 00:18:04.984 "data_size": 7936 00:18:04.984 }, 00:18:04.984 { 00:18:04.984 "name": "pt2", 00:18:04.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.984 "is_configured": true, 00:18:04.984 "data_offset": 256, 00:18:04.984 "data_size": 7936 00:18:04.984 } 00:18:04.984 ] 00:18:04.984 }' 00:18:04.984 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.984 18:31:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:05.240 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:05.496 [2024-07-15 18:31:57.889554] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.754 "name": "raid_bdev1", 00:18:05.754 "aliases": [ 00:18:05.754 "84381fc1-42d8-11ef-9ade-d5fc5159efa5" 00:18:05.754 ], 00:18:05.754 "product_name": "Raid Volume", 00:18:05.754 "block_size": 
4096, 00:18:05.754 "num_blocks": 7936, 00:18:05.754 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:05.754 "md_size": 32, 00:18:05.754 "md_interleave": false, 00:18:05.754 "dif_type": 0, 00:18:05.754 "assigned_rate_limits": { 00:18:05.754 "rw_ios_per_sec": 0, 00:18:05.754 "rw_mbytes_per_sec": 0, 00:18:05.754 "r_mbytes_per_sec": 0, 00:18:05.754 "w_mbytes_per_sec": 0 00:18:05.754 }, 00:18:05.754 "claimed": false, 00:18:05.754 "zoned": false, 00:18:05.754 "supported_io_types": { 00:18:05.754 "read": true, 00:18:05.754 "write": true, 00:18:05.754 "unmap": false, 00:18:05.754 "flush": false, 00:18:05.754 "reset": true, 00:18:05.754 "nvme_admin": false, 00:18:05.754 "nvme_io": false, 00:18:05.754 "nvme_io_md": false, 00:18:05.754 "write_zeroes": true, 00:18:05.754 "zcopy": false, 00:18:05.754 "get_zone_info": false, 00:18:05.754 "zone_management": false, 00:18:05.754 "zone_append": false, 00:18:05.754 "compare": false, 00:18:05.754 "compare_and_write": false, 00:18:05.754 "abort": false, 00:18:05.754 "seek_hole": false, 00:18:05.754 "seek_data": false, 00:18:05.754 "copy": false, 00:18:05.754 "nvme_iov_md": false 00:18:05.754 }, 00:18:05.754 "memory_domains": [ 00:18:05.754 { 00:18:05.754 "dma_device_id": "system", 00:18:05.754 "dma_device_type": 1 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.754 "dma_device_type": 2 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "dma_device_id": "system", 00:18:05.754 "dma_device_type": 1 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.754 "dma_device_type": 2 00:18:05.754 } 00:18:05.754 ], 00:18:05.754 "driver_specific": { 00:18:05.754 "raid": { 00:18:05.754 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:05.754 "strip_size_kb": 0, 00:18:05.754 "state": "online", 00:18:05.754 "raid_level": "raid1", 00:18:05.754 "superblock": true, 00:18:05.754 "num_base_bdevs": 2, 00:18:05.754 "num_base_bdevs_discovered": 2, 00:18:05.754 "num_base_bdevs_operational": 2, 00:18:05.754 "base_bdevs_list": [ 00:18:05.754 { 00:18:05.754 "name": "pt1", 00:18:05.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.754 "is_configured": true, 00:18:05.754 "data_offset": 256, 00:18:05.754 "data_size": 7936 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "name": "pt2", 00:18:05.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.754 "is_configured": true, 00:18:05.754 "data_offset": 256, 00:18:05.754 "data_size": 7936 00:18:05.754 } 00:18:05.754 ] 00:18:05.754 } 00:18:05.754 } 00:18:05.754 }' 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:05.754 pt2' 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:05.754 18:31:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.754 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.754 "name": "pt1", 00:18:05.754 "aliases": [ 00:18:05.754 "00000000-0000-0000-0000-000000000001" 00:18:05.754 ], 00:18:05.754 "product_name": 
"passthru", 00:18:05.754 "block_size": 4096, 00:18:05.754 "num_blocks": 8192, 00:18:05.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.754 "md_size": 32, 00:18:05.754 "md_interleave": false, 00:18:05.754 "dif_type": 0, 00:18:05.754 "assigned_rate_limits": { 00:18:05.754 "rw_ios_per_sec": 0, 00:18:05.754 "rw_mbytes_per_sec": 0, 00:18:05.754 "r_mbytes_per_sec": 0, 00:18:05.754 "w_mbytes_per_sec": 0 00:18:05.754 }, 00:18:05.754 "claimed": true, 00:18:05.754 "claim_type": "exclusive_write", 00:18:05.754 "zoned": false, 00:18:05.754 "supported_io_types": { 00:18:05.754 "read": true, 00:18:05.754 "write": true, 00:18:05.754 "unmap": true, 00:18:05.754 "flush": true, 00:18:05.754 "reset": true, 00:18:05.754 "nvme_admin": false, 00:18:05.754 "nvme_io": false, 00:18:05.754 "nvme_io_md": false, 00:18:05.754 "write_zeroes": true, 00:18:05.754 "zcopy": true, 00:18:05.754 "get_zone_info": false, 00:18:05.754 "zone_management": false, 00:18:05.754 "zone_append": false, 00:18:05.754 "compare": false, 00:18:05.754 "compare_and_write": false, 00:18:05.754 "abort": true, 00:18:05.754 "seek_hole": false, 00:18:05.754 "seek_data": false, 00:18:05.754 "copy": true, 00:18:05.754 "nvme_iov_md": false 00:18:05.754 }, 00:18:05.754 "memory_domains": [ 00:18:05.754 { 00:18:05.754 "dma_device_id": "system", 00:18:05.754 "dma_device_type": 1 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.754 "dma_device_type": 2 00:18:05.754 } 00:18:05.754 ], 00:18:05.754 "driver_specific": { 00:18:05.754 "passthru": { 00:18:05.754 "name": "pt1", 00:18:05.754 "base_bdev_name": "malloc1" 00:18:05.754 } 00:18:05.754 } 00:18:05.754 }' 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:06.012 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.303 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.303 "name": 
"pt2", 00:18:06.303 "aliases": [ 00:18:06.303 "00000000-0000-0000-0000-000000000002" 00:18:06.303 ], 00:18:06.303 "product_name": "passthru", 00:18:06.303 "block_size": 4096, 00:18:06.303 "num_blocks": 8192, 00:18:06.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.303 "md_size": 32, 00:18:06.303 "md_interleave": false, 00:18:06.303 "dif_type": 0, 00:18:06.303 "assigned_rate_limits": { 00:18:06.303 "rw_ios_per_sec": 0, 00:18:06.303 "rw_mbytes_per_sec": 0, 00:18:06.303 "r_mbytes_per_sec": 0, 00:18:06.303 "w_mbytes_per_sec": 0 00:18:06.303 }, 00:18:06.303 "claimed": true, 00:18:06.303 "claim_type": "exclusive_write", 00:18:06.303 "zoned": false, 00:18:06.303 "supported_io_types": { 00:18:06.303 "read": true, 00:18:06.303 "write": true, 00:18:06.303 "unmap": true, 00:18:06.303 "flush": true, 00:18:06.303 "reset": true, 00:18:06.303 "nvme_admin": false, 00:18:06.303 "nvme_io": false, 00:18:06.303 "nvme_io_md": false, 00:18:06.304 "write_zeroes": true, 00:18:06.304 "zcopy": true, 00:18:06.304 "get_zone_info": false, 00:18:06.304 "zone_management": false, 00:18:06.304 "zone_append": false, 00:18:06.304 "compare": false, 00:18:06.304 "compare_and_write": false, 00:18:06.304 "abort": true, 00:18:06.304 "seek_hole": false, 00:18:06.304 "seek_data": false, 00:18:06.304 "copy": true, 00:18:06.304 "nvme_iov_md": false 00:18:06.304 }, 00:18:06.304 "memory_domains": [ 00:18:06.304 { 00:18:06.304 "dma_device_id": "system", 00:18:06.304 "dma_device_type": 1 00:18:06.304 }, 00:18:06.304 { 00:18:06.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.304 "dma_device_type": 2 00:18:06.304 } 00:18:06.304 ], 00:18:06.304 "driver_specific": { 00:18:06.304 "passthru": { 00:18:06.304 "name": "pt2", 00:18:06.304 "base_bdev_name": "malloc2" 00:18:06.304 } 00:18:06.304 } 00:18:06.304 }' 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.304 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:06.562 [2024-07-15 18:31:58.777618] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:06.562 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=84381fc1-42d8-11ef-9ade-d5fc5159efa5 00:18:06.562 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 84381fc1-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:06.562 18:31:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:06.819 [2024-07-15 18:31:59.025594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.819 [2024-07-15 18:31:59.025618] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.819 [2024-07-15 18:31:59.025642] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.819 [2024-07-15 18:31:59.025663] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.819 [2024-07-15 18:31:59.025669] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227434f00 name raid_bdev1, state offline 00:18:06.819 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.819 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:07.077 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:07.077 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:07.077 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.077 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:07.334 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.334 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:07.592 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:07.592 18:31:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.850 
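At this point the trace has torn the array down (bdev_raid_delete followed by bdev_passthru_delete for both members) and is about to assert the negative case: re-creating raid_bdev1 directly on malloc1/malloc2 must fail, because both malloc bdevs still carry a superblock from the deleted array, and the JSON-RPC error response below confirms it with -17 "File exists". Roughly:

    # teardown sanity check: jq's any is false once no passthru bdevs remain
    $rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'
    # -> false

    # expected to FAIL: each malloc bdev holds a superblock naming a different
    # raid configuration, so bdev_raid_create is rejected with "File exists"
    $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 && exit 1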
18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:07.850 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:08.108 [2024-07-15 18:32:00.257705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:08.108 [2024-07-15 18:32:00.258305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:08.108 [2024-07-15 18:32:00.258333] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:08.108 [2024-07-15 18:32:00.258370] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:08.108 [2024-07-15 18:32:00.258381] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.108 [2024-07-15 18:32:00.258386] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227434c80 name raid_bdev1, state configuring 00:18:08.108 request: 00:18:08.108 { 00:18:08.108 "name": "raid_bdev1", 00:18:08.108 "raid_level": "raid1", 00:18:08.108 "base_bdevs": [ 00:18:08.108 "malloc1", 00:18:08.108 "malloc2" 00:18:08.108 ], 00:18:08.108 "superblock": false, 00:18:08.108 "method": "bdev_raid_create", 00:18:08.108 "req_id": 1 00:18:08.108 } 00:18:08.108 Got JSON-RPC error response 00:18:08.108 response: 00:18:08.108 { 00:18:08.108 "code": -17, 00:18:08.108 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:08.108 } 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.108 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.366 [2024-07-15 18:32:00.745726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.366 [2024-07-15 18:32:00.745836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.366 [2024-07-15 18:32:00.745849] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434780 00:18:08.366 [2024-07-15 18:32:00.745857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.366 [2024-07-15 18:32:00.746486] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.366 [2024-07-15 18:32:00.746511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.366 [2024-07-15 18:32:00.746537] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:08.366 [2024-07-15 18:32:00.746550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.366 pt1 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.366 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.367 18:32:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.625 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.625 "name": "raid_bdev1", 00:18:08.625 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:08.625 "strip_size_kb": 0, 00:18:08.625 "state": "configuring", 00:18:08.625 "raid_level": "raid1", 00:18:08.625 "superblock": true, 00:18:08.625 "num_base_bdevs": 2, 00:18:08.625 "num_base_bdevs_discovered": 1, 00:18:08.625 "num_base_bdevs_operational": 2, 00:18:08.625 "base_bdevs_list": [ 00:18:08.625 { 00:18:08.625 "name": "pt1", 00:18:08.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.625 "is_configured": true, 00:18:08.625 "data_offset": 256, 00:18:08.625 "data_size": 7936 00:18:08.625 }, 00:18:08.625 { 
00:18:08.625 "name": null, 00:18:08.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.625 "is_configured": false, 00:18:08.625 "data_offset": 256, 00:18:08.625 "data_size": 7936 00:18:08.625 } 00:18:08.625 ] 00:18:08.625 }' 00:18:08.625 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.625 18:32:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.191 [2024-07-15 18:32:01.525805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.191 [2024-07-15 18:32:01.525879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.191 [2024-07-15 18:32:01.525892] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434f00 00:18:09.191 [2024-07-15 18:32:01.525901] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.191 [2024-07-15 18:32:01.525974] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.191 [2024-07-15 18:32:01.525985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.191 [2024-07-15 18:32:01.526009] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:09.191 [2024-07-15 18:32:01.526018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.191 [2024-07-15 18:32:01.526039] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213227435180 00:18:09.191 [2024-07-15 18:32:01.526042] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.191 [2024-07-15 18:32:01.526062] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213227497e20 00:18:09.191 [2024-07-15 18:32:01.526084] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213227435180 00:18:09.191 [2024-07-15 18:32:01.526087] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213227435180 00:18:09.191 [2024-07-15 18:32:01.526103] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.191 pt2 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.191 
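This is the payoff of the -s superblock option: no bdev_raid_create is needed on re-attach. As soon as the passthru bdevs are re-registered, raid examine() finds the on-disk metadata and reassembles raid_bdev1 by itself, first "configuring" with one member discovered, then "online" once both are back. Sketched with the same RPCs:

    # first member back: raid_bdev1 reappears in "configuring" state (1 of 2)
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
    # -> configuring

    # second member completes the set and the array transitions on its own
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
    # -> online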
18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.191 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.449 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.449 "name": "raid_bdev1", 00:18:09.449 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:09.449 "strip_size_kb": 0, 00:18:09.449 "state": "online", 00:18:09.449 "raid_level": "raid1", 00:18:09.449 "superblock": true, 00:18:09.449 "num_base_bdevs": 2, 00:18:09.449 "num_base_bdevs_discovered": 2, 00:18:09.449 "num_base_bdevs_operational": 2, 00:18:09.449 "base_bdevs_list": [ 00:18:09.449 { 00:18:09.449 "name": "pt1", 00:18:09.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.449 "is_configured": true, 00:18:09.449 "data_offset": 256, 00:18:09.449 "data_size": 7936 00:18:09.449 }, 00:18:09.449 { 00:18:09.449 "name": "pt2", 00:18:09.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.449 "is_configured": true, 00:18:09.449 "data_offset": 256, 00:18:09.449 "data_size": 7936 00:18:09.449 } 00:18:09.449 ] 00:18:09.449 }' 00:18:09.449 18:32:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.449 18:32:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:10.015 [2024-07-15 18:32:02.329925] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:10.015 "name": "raid_bdev1", 00:18:10.015 "aliases": [ 00:18:10.015 
"84381fc1-42d8-11ef-9ade-d5fc5159efa5" 00:18:10.015 ], 00:18:10.015 "product_name": "Raid Volume", 00:18:10.015 "block_size": 4096, 00:18:10.015 "num_blocks": 7936, 00:18:10.015 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:10.015 "md_size": 32, 00:18:10.015 "md_interleave": false, 00:18:10.015 "dif_type": 0, 00:18:10.015 "assigned_rate_limits": { 00:18:10.015 "rw_ios_per_sec": 0, 00:18:10.015 "rw_mbytes_per_sec": 0, 00:18:10.015 "r_mbytes_per_sec": 0, 00:18:10.015 "w_mbytes_per_sec": 0 00:18:10.015 }, 00:18:10.015 "claimed": false, 00:18:10.015 "zoned": false, 00:18:10.015 "supported_io_types": { 00:18:10.015 "read": true, 00:18:10.015 "write": true, 00:18:10.015 "unmap": false, 00:18:10.015 "flush": false, 00:18:10.015 "reset": true, 00:18:10.015 "nvme_admin": false, 00:18:10.015 "nvme_io": false, 00:18:10.015 "nvme_io_md": false, 00:18:10.015 "write_zeroes": true, 00:18:10.015 "zcopy": false, 00:18:10.015 "get_zone_info": false, 00:18:10.015 "zone_management": false, 00:18:10.015 "zone_append": false, 00:18:10.015 "compare": false, 00:18:10.015 "compare_and_write": false, 00:18:10.015 "abort": false, 00:18:10.015 "seek_hole": false, 00:18:10.015 "seek_data": false, 00:18:10.015 "copy": false, 00:18:10.015 "nvme_iov_md": false 00:18:10.015 }, 00:18:10.015 "memory_domains": [ 00:18:10.015 { 00:18:10.015 "dma_device_id": "system", 00:18:10.015 "dma_device_type": 1 00:18:10.015 }, 00:18:10.015 { 00:18:10.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.015 "dma_device_type": 2 00:18:10.015 }, 00:18:10.015 { 00:18:10.015 "dma_device_id": "system", 00:18:10.015 "dma_device_type": 1 00:18:10.015 }, 00:18:10.015 { 00:18:10.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.015 "dma_device_type": 2 00:18:10.015 } 00:18:10.015 ], 00:18:10.015 "driver_specific": { 00:18:10.015 "raid": { 00:18:10.015 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:10.015 "strip_size_kb": 0, 00:18:10.015 "state": "online", 00:18:10.015 "raid_level": "raid1", 00:18:10.015 "superblock": true, 00:18:10.015 "num_base_bdevs": 2, 00:18:10.015 "num_base_bdevs_discovered": 2, 00:18:10.015 "num_base_bdevs_operational": 2, 00:18:10.015 "base_bdevs_list": [ 00:18:10.015 { 00:18:10.015 "name": "pt1", 00:18:10.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.015 "is_configured": true, 00:18:10.015 "data_offset": 256, 00:18:10.015 "data_size": 7936 00:18:10.015 }, 00:18:10.015 { 00:18:10.015 "name": "pt2", 00:18:10.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.015 "is_configured": true, 00:18:10.015 "data_offset": 256, 00:18:10.015 "data_size": 7936 00:18:10.015 } 00:18:10.015 ] 00:18:10.015 } 00:18:10.015 } 00:18:10.015 }' 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:10.015 pt2' 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:10.015 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.274 "name": "pt1", 
00:18:10.274 "aliases": [ 00:18:10.274 "00000000-0000-0000-0000-000000000001" 00:18:10.274 ], 00:18:10.274 "product_name": "passthru", 00:18:10.274 "block_size": 4096, 00:18:10.274 "num_blocks": 8192, 00:18:10.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.274 "md_size": 32, 00:18:10.274 "md_interleave": false, 00:18:10.274 "dif_type": 0, 00:18:10.274 "assigned_rate_limits": { 00:18:10.274 "rw_ios_per_sec": 0, 00:18:10.274 "rw_mbytes_per_sec": 0, 00:18:10.274 "r_mbytes_per_sec": 0, 00:18:10.274 "w_mbytes_per_sec": 0 00:18:10.274 }, 00:18:10.274 "claimed": true, 00:18:10.274 "claim_type": "exclusive_write", 00:18:10.274 "zoned": false, 00:18:10.274 "supported_io_types": { 00:18:10.274 "read": true, 00:18:10.274 "write": true, 00:18:10.274 "unmap": true, 00:18:10.274 "flush": true, 00:18:10.274 "reset": true, 00:18:10.274 "nvme_admin": false, 00:18:10.274 "nvme_io": false, 00:18:10.274 "nvme_io_md": false, 00:18:10.274 "write_zeroes": true, 00:18:10.274 "zcopy": true, 00:18:10.274 "get_zone_info": false, 00:18:10.274 "zone_management": false, 00:18:10.274 "zone_append": false, 00:18:10.274 "compare": false, 00:18:10.274 "compare_and_write": false, 00:18:10.274 "abort": true, 00:18:10.274 "seek_hole": false, 00:18:10.274 "seek_data": false, 00:18:10.274 "copy": true, 00:18:10.274 "nvme_iov_md": false 00:18:10.274 }, 00:18:10.274 "memory_domains": [ 00:18:10.274 { 00:18:10.274 "dma_device_id": "system", 00:18:10.274 "dma_device_type": 1 00:18:10.274 }, 00:18:10.274 { 00:18:10.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.274 "dma_device_type": 2 00:18:10.274 } 00:18:10.274 ], 00:18:10.274 "driver_specific": { 00:18:10.274 "passthru": { 00:18:10.274 "name": "pt1", 00:18:10.274 "base_bdev_name": "malloc1" 00:18:10.274 } 00:18:10.274 } 00:18:10.274 }' 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.274 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.591 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:10.591 
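The repeated jq probes above are what verify_raid_bdev_properties amounts to: four field assertions per bdev, checking the separate-metadata geometry. Pulled out of the trace into a compact form:

    # md-separate geometry checks for one member (pt1); pt2 and the raid
    # volume itself are probed the same way in the trace
    info=$($rpc bdev_get_bdevs -b pt1 | jq '.[]')
    [[ $(jq .block_size    <<<"$info") == 4096  ]]
    [[ $(jq .md_size       <<<"$info") == 32    ]]
    [[ $(jq .md_interleave <<<"$info") == false ]]
    [[ $(jq .dif_type      <<<"$info") == 0     ]]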
18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.591 "name": "pt2", 00:18:10.591 "aliases": [ 00:18:10.591 "00000000-0000-0000-0000-000000000002" 00:18:10.591 ], 00:18:10.591 "product_name": "passthru", 00:18:10.591 "block_size": 4096, 00:18:10.591 "num_blocks": 8192, 00:18:10.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.591 "md_size": 32, 00:18:10.591 "md_interleave": false, 00:18:10.591 "dif_type": 0, 00:18:10.591 "assigned_rate_limits": { 00:18:10.591 "rw_ios_per_sec": 0, 00:18:10.591 "rw_mbytes_per_sec": 0, 00:18:10.591 "r_mbytes_per_sec": 0, 00:18:10.591 "w_mbytes_per_sec": 0 00:18:10.591 }, 00:18:10.591 "claimed": true, 00:18:10.591 "claim_type": "exclusive_write", 00:18:10.591 "zoned": false, 00:18:10.591 "supported_io_types": { 00:18:10.591 "read": true, 00:18:10.591 "write": true, 00:18:10.591 "unmap": true, 00:18:10.591 "flush": true, 00:18:10.591 "reset": true, 00:18:10.591 "nvme_admin": false, 00:18:10.591 "nvme_io": false, 00:18:10.591 "nvme_io_md": false, 00:18:10.592 "write_zeroes": true, 00:18:10.592 "zcopy": true, 00:18:10.592 "get_zone_info": false, 00:18:10.592 "zone_management": false, 00:18:10.592 "zone_append": false, 00:18:10.592 "compare": false, 00:18:10.592 "compare_and_write": false, 00:18:10.592 "abort": true, 00:18:10.592 "seek_hole": false, 00:18:10.592 "seek_data": false, 00:18:10.592 "copy": true, 00:18:10.592 "nvme_iov_md": false 00:18:10.592 }, 00:18:10.592 "memory_domains": [ 00:18:10.592 { 00:18:10.592 "dma_device_id": "system", 00:18:10.592 "dma_device_type": 1 00:18:10.592 }, 00:18:10.592 { 00:18:10.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.592 "dma_device_type": 2 00:18:10.592 } 00:18:10.592 ], 00:18:10.592 "driver_specific": { 00:18:10.592 "passthru": { 00:18:10.592 "name": "pt2", 00:18:10.592 "base_bdev_name": "malloc2" 00:18:10.592 } 00:18:10.592 } 00:18:10.592 }' 00:18:10.592 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.592 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.592 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:10.851 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.851 18:32:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.851 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:18:11.108 [2024-07-15 18:32:03.253979] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.108 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 84381fc1-42d8-11ef-9ade-d5fc5159efa5 '!=' 84381fc1-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:11.108 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:11.108 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:11.108 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:18:11.108 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:11.366 [2024-07-15 18:32:03.549977] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.366 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.624 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.624 "name": "raid_bdev1", 00:18:11.624 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:11.624 "strip_size_kb": 0, 00:18:11.624 "state": "online", 00:18:11.624 "raid_level": "raid1", 00:18:11.624 "superblock": true, 00:18:11.624 "num_base_bdevs": 2, 00:18:11.624 "num_base_bdevs_discovered": 1, 00:18:11.624 "num_base_bdevs_operational": 1, 00:18:11.624 "base_bdevs_list": [ 00:18:11.624 { 00:18:11.624 "name": null, 00:18:11.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.624 "is_configured": false, 00:18:11.624 "data_offset": 256, 00:18:11.624 "data_size": 7936 00:18:11.624 }, 00:18:11.624 { 00:18:11.624 "name": "pt2", 00:18:11.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.624 "is_configured": true, 00:18:11.624 "data_offset": 256, 00:18:11.624 "data_size": 7936 00:18:11.624 } 00:18:11.624 ] 00:18:11.624 }' 00:18:11.624 18:32:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:11.624 18:32:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.883 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:12.141 [2024-07-15 18:32:04.438049] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.141 [2024-07-15 18:32:04.438076] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.141 [2024-07-15 18:32:04.438100] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.141 [2024-07-15 18:32:04.438113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.141 [2024-07-15 18:32:04.438117] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227435180 name raid_bdev1, state offline 00:18:12.141 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.141 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:12.399 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:12.399 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:12.399 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:12.399 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:12.399 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:18:12.657 18:32:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.223 [2024-07-15 18:32:05.318123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.223 [2024-07-15 18:32:05.318196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.223 [2024-07-15 18:32:05.318211] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434f00 00:18:13.223 [2024-07-15 18:32:05.318220] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.223 [2024-07-15 18:32:05.318884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.223 [2024-07-15 18:32:05.318919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.223 [2024-07-15 18:32:05.318958] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:18:13.223 [2024-07-15 18:32:05.318973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.224 [2024-07-15 18:32:05.318989] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213227435180 00:18:13.224 [2024-07-15 18:32:05.318993] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.224 [2024-07-15 18:32:05.319014] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213227497e20 00:18:13.224 [2024-07-15 18:32:05.319041] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213227435180 00:18:13.224 [2024-07-15 18:32:05.319044] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213227435180 00:18:13.224 [2024-07-15 18:32:05.319067] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.224 pt2 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.224 "name": "raid_bdev1", 00:18:13.224 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:13.224 "strip_size_kb": 0, 00:18:13.224 "state": "online", 00:18:13.224 "raid_level": "raid1", 00:18:13.224 "superblock": true, 00:18:13.224 "num_base_bdevs": 2, 00:18:13.224 "num_base_bdevs_discovered": 1, 00:18:13.224 "num_base_bdevs_operational": 1, 00:18:13.224 "base_bdevs_list": [ 00:18:13.224 { 00:18:13.224 "name": null, 00:18:13.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.224 "is_configured": false, 00:18:13.224 "data_offset": 256, 00:18:13.224 "data_size": 7936 00:18:13.224 }, 00:18:13.224 { 00:18:13.224 "name": "pt2", 00:18:13.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.224 "is_configured": true, 00:18:13.224 "data_offset": 256, 00:18:13.224 "data_size": 7936 00:18:13.224 } 00:18:13.224 ] 00:18:13.224 }' 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:13.224 18:32:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.812 18:32:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:13.812 [2024-07-15 18:32:06.206198] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.812 [2024-07-15 18:32:06.206225] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.812 [2024-07-15 18:32:06.206264] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.812 [2024-07-15 18:32:06.206277] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.812 [2024-07-15 18:32:06.206281] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227435180 name raid_bdev1, state offline 00:18:14.070 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.070 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:14.329 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:14.329 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:14.329 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:14.329 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.588 [2024-07-15 18:32:06.730245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.588 [2024-07-15 18:32:06.730305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.588 [2024-07-15 18:32:06.730318] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213227434c80 00:18:14.588 [2024-07-15 18:32:06.730327] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.588 [2024-07-15 18:32:06.730964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.588 [2024-07-15 18:32:06.730993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.588 [2024-07-15 18:32:06.731019] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:14.588 [2024-07-15 18:32:06.731031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.588 [2024-07-15 18:32:06.731052] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:14.588 [2024-07-15 18:32:06.731056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.588 [2024-07-15 18:32:06.731061] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227434780 name raid_bdev1, state configuring 00:18:14.588 [2024-07-15 18:32:06.731069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.588 [2024-07-15 18:32:06.731082] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213227434780 00:18:14.588 [2024-07-15 
18:32:06.731086] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.588 [2024-07-15 18:32:06.731107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213227497e20 00:18:14.588 [2024-07-15 18:32:06.731132] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213227434780 00:18:14.588 [2024-07-15 18:32:06.731136] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213227434780 00:18:14.588 [2024-07-15 18:32:06.731150] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.588 pt1 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.588 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.880 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.880 "name": "raid_bdev1", 00:18:14.880 "uuid": "84381fc1-42d8-11ef-9ade-d5fc5159efa5", 00:18:14.880 "strip_size_kb": 0, 00:18:14.880 "state": "online", 00:18:14.880 "raid_level": "raid1", 00:18:14.880 "superblock": true, 00:18:14.880 "num_base_bdevs": 2, 00:18:14.880 "num_base_bdevs_discovered": 1, 00:18:14.880 "num_base_bdevs_operational": 1, 00:18:14.880 "base_bdevs_list": [ 00:18:14.880 { 00:18:14.880 "name": null, 00:18:14.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.880 "is_configured": false, 00:18:14.880 "data_offset": 256, 00:18:14.880 "data_size": 7936 00:18:14.880 }, 00:18:14.880 { 00:18:14.880 "name": "pt2", 00:18:14.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.880 "is_configured": true, 00:18:14.880 "data_offset": 256, 00:18:14.880 "data_size": 7936 00:18:14.880 } 00:18:14.880 ] 00:18:14.880 }' 00:18:14.880 18:32:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.880 18:32:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 18:32:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:15.156 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:15.415 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:15.415 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.415 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:15.415 [2024-07-15 18:32:07.810366] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 84381fc1-42d8-11ef-9ade-d5fc5159efa5 '!=' 84381fc1-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66541 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66541 ']' 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66541 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66541 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:15.674 killing process with pid 66541 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66541' 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66541 00:18:15.674 [2024-07-15 18:32:07.840107] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.674 [2024-07-15 18:32:07.840135] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.674 [2024-07-15 18:32:07.840148] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.674 [2024-07-15 18:32:07.840153] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213227434780 name raid_bdev1, state offline 00:18:15.674 18:32:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66541 00:18:15.674 [2024-07-15 18:32:07.854444] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.934 18:32:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:18:15.934 00:18:15.934 real 0m13.416s 00:18:15.934 user 0m23.937s 00:18:15.934 sys 0m2.074s 00:18:15.934 ************************************ 00:18:15.934 END TEST raid_superblock_test_md_separate 00:18:15.934 ************************************ 00:18:15.934 18:32:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.934 18:32:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 18:32:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:15.934 18:32:08 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:18:15.934 18:32:08 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:18:15.934 18:32:08 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:15.934 18:32:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:15.934 18:32:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.934 18:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 ************************************ 00:18:15.934 START TEST raid_state_function_test_sb_md_interleaved 00:18:15.934 ************************************ 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 
-- # '[' raid1 '!=' raid1 ']' 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66928 00:18:15.934 Process raid pid: 66928 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66928' 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66928 /var/tmp/spdk-raid.sock 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66928 ']' 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.934 18:32:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 [2024-07-15 18:32:08.134480] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:18:15.934 [2024-07-15 18:32:08.134684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:16.502 EAL: TSC is not safe to use in SMP mode 00:18:16.502 EAL: TSC is not invariant 00:18:16.502 [2024-07-15 18:32:08.737519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.502 [2024-07-15 18:32:08.855032] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
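
For reference, the startup-and-wait step logged above (launching bdev_svc and then waitforlisten on the RPC socket) can be approximated with a small polling loop. A minimal sketch, not the harness's actual waitforlisten implementation; it assumes spdk_get_version is available as a cheap liveness RPC (paths and flags as in this log):

    RPC_SOCK=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$RPC_SOCK" -i 0 -L bdev_raid &
    svc_pid=$!
    # Poll until the app answers RPCs on the UNIX socket, bailing out if it dies first.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; do
        kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited during startup" >&2; exit 1; }
        sleep 0.1
    done
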
00:18:16.502 [2024-07-15 18:32:08.857525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.502 [2024-07-15 18:32:08.858538] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.502 [2024-07-15 18:32:08.858558] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.761 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.761 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:16.761 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:17.020 [2024-07-15 18:32:09.395622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.020 [2024-07-15 18:32:09.395689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.020 [2024-07-15 18:32:09.395695] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.020 [2024-07-15 18:32:09.395721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.020 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.587 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.587 "name": "Existed_Raid", 00:18:17.587 "uuid": "8b953838-42d8-11ef-9ade-d5fc5159efa5", 00:18:17.587 "strip_size_kb": 0, 00:18:17.587 "state": "configuring", 00:18:17.587 "raid_level": "raid1", 00:18:17.587 "superblock": true, 00:18:17.587 "num_base_bdevs": 2, 00:18:17.587 "num_base_bdevs_discovered": 0, 00:18:17.587 "num_base_bdevs_operational": 2, 00:18:17.587 
"base_bdevs_list": [ 00:18:17.587 { 00:18:17.587 "name": "BaseBdev1", 00:18:17.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.587 "is_configured": false, 00:18:17.587 "data_offset": 0, 00:18:17.587 "data_size": 0 00:18:17.587 }, 00:18:17.587 { 00:18:17.587 "name": "BaseBdev2", 00:18:17.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.587 "is_configured": false, 00:18:17.587 "data_offset": 0, 00:18:17.587 "data_size": 0 00:18:17.587 } 00:18:17.587 ] 00:18:17.587 }' 00:18:17.587 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.587 18:32:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.846 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:17.846 [2024-07-15 18:32:10.227665] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.846 [2024-07-15 18:32:10.227696] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1115ea434500 name Existed_Raid, state configuring 00:18:17.846 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:18.412 [2024-07-15 18:32:10.511711] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.413 [2024-07-15 18:32:10.511768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.413 [2024-07-15 18:32:10.511775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.413 [2024-07-15 18:32:10.511784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:18.413 [2024-07-15 18:32:10.744691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.413 BaseBdev1 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:18.413 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.670 18:32:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:18:18.929 [ 00:18:18.929 { 00:18:18.929 "name": "BaseBdev1", 00:18:18.929 "aliases": [ 00:18:18.929 "8c62ed56-42d8-11ef-9ade-d5fc5159efa5" 00:18:18.929 ], 00:18:18.929 "product_name": "Malloc disk", 00:18:18.929 "block_size": 4128, 00:18:18.929 "num_blocks": 8192, 00:18:18.929 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:18.929 "md_size": 32, 00:18:18.929 "md_interleave": true, 00:18:18.929 "dif_type": 0, 00:18:18.929 "assigned_rate_limits": { 00:18:18.929 "rw_ios_per_sec": 0, 00:18:18.929 "rw_mbytes_per_sec": 0, 00:18:18.929 "r_mbytes_per_sec": 0, 00:18:18.929 "w_mbytes_per_sec": 0 00:18:18.929 }, 00:18:18.929 "claimed": true, 00:18:18.929 "claim_type": "exclusive_write", 00:18:18.929 "zoned": false, 00:18:18.929 "supported_io_types": { 00:18:18.929 "read": true, 00:18:18.929 "write": true, 00:18:18.929 "unmap": true, 00:18:18.929 "flush": true, 00:18:18.929 "reset": true, 00:18:18.929 "nvme_admin": false, 00:18:18.929 "nvme_io": false, 00:18:18.929 "nvme_io_md": false, 00:18:18.929 "write_zeroes": true, 00:18:18.929 "zcopy": true, 00:18:18.929 "get_zone_info": false, 00:18:18.929 "zone_management": false, 00:18:18.929 "zone_append": false, 00:18:18.929 "compare": false, 00:18:18.929 "compare_and_write": false, 00:18:18.929 "abort": true, 00:18:18.929 "seek_hole": false, 00:18:18.929 "seek_data": false, 00:18:18.929 "copy": true, 00:18:18.929 "nvme_iov_md": false 00:18:18.929 }, 00:18:18.929 "memory_domains": [ 00:18:18.929 { 00:18:18.929 "dma_device_id": "system", 00:18:18.929 "dma_device_type": 1 00:18:18.929 }, 00:18:18.929 { 00:18:18.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.929 "dma_device_type": 2 00:18:18.929 } 00:18:18.929 ], 00:18:18.929 "driver_specific": {} 00:18:18.929 } 00:18:18.929 ] 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.929 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:19.198 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.198 "name": "Existed_Raid", 00:18:19.198 "uuid": "8c3f856c-42d8-11ef-9ade-d5fc5159efa5", 00:18:19.198 "strip_size_kb": 0, 00:18:19.198 "state": "configuring", 00:18:19.198 "raid_level": "raid1", 00:18:19.198 "superblock": true, 00:18:19.198 "num_base_bdevs": 2, 00:18:19.198 "num_base_bdevs_discovered": 1, 00:18:19.198 "num_base_bdevs_operational": 2, 00:18:19.199 "base_bdevs_list": [ 00:18:19.199 { 00:18:19.199 "name": "BaseBdev1", 00:18:19.199 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:19.199 "is_configured": true, 00:18:19.199 "data_offset": 256, 00:18:19.199 "data_size": 7936 00:18:19.199 }, 00:18:19.199 { 00:18:19.199 "name": "BaseBdev2", 00:18:19.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.199 "is_configured": false, 00:18:19.199 "data_offset": 0, 00:18:19.199 "data_size": 0 00:18:19.199 } 00:18:19.199 ] 00:18:19.199 }' 00:18:19.199 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.199 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.482 18:32:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:19.740 [2024-07-15 18:32:12.115840] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.740 [2024-07-15 18:32:12.115894] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1115ea434500 name Existed_Raid, state configuring 00:18:19.740 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:20.304 [2024-07-15 18:32:12.407878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.304 [2024-07-15 18:32:12.408732] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.304 [2024-07-15 18:32:12.408772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.304 "name": "Existed_Raid", 00:18:20.304 "uuid": "8d60da58-42d8-11ef-9ade-d5fc5159efa5", 00:18:20.304 "strip_size_kb": 0, 00:18:20.304 "state": "configuring", 00:18:20.304 "raid_level": "raid1", 00:18:20.304 "superblock": true, 00:18:20.304 "num_base_bdevs": 2, 00:18:20.304 "num_base_bdevs_discovered": 1, 00:18:20.304 "num_base_bdevs_operational": 2, 00:18:20.304 "base_bdevs_list": [ 00:18:20.304 { 00:18:20.304 "name": "BaseBdev1", 00:18:20.304 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:20.304 "is_configured": true, 00:18:20.304 "data_offset": 256, 00:18:20.304 "data_size": 7936 00:18:20.304 }, 00:18:20.304 { 00:18:20.304 "name": "BaseBdev2", 00:18:20.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.304 "is_configured": false, 00:18:20.304 "data_offset": 0, 00:18:20.304 "data_size": 0 00:18:20.304 } 00:18:20.304 ] 00:18:20.304 }' 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.304 18:32:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.870 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:21.127 [2024-07-15 18:32:13.276056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.127 [2024-07-15 18:32:13.276138] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1115ea434a00 00:18:21.127 [2024-07-15 18:32:13.276144] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.127 [2024-07-15 18:32:13.276182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1115ea497e20 00:18:21.127 [2024-07-15 18:32:13.276198] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1115ea434a00 00:18:21.127 [2024-07-15 18:32:13.276202] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1115ea434a00 00:18:21.127 [2024-07-15 18:32:13.276223] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.127 BaseBdev2 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:21.127 
18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:21.127 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.385 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.642 [ 00:18:21.642 { 00:18:21.642 "name": "BaseBdev2", 00:18:21.642 "aliases": [ 00:18:21.642 "8de5504d-42d8-11ef-9ade-d5fc5159efa5" 00:18:21.642 ], 00:18:21.642 "product_name": "Malloc disk", 00:18:21.642 "block_size": 4128, 00:18:21.642 "num_blocks": 8192, 00:18:21.642 "uuid": "8de5504d-42d8-11ef-9ade-d5fc5159efa5", 00:18:21.642 "md_size": 32, 00:18:21.642 "md_interleave": true, 00:18:21.642 "dif_type": 0, 00:18:21.642 "assigned_rate_limits": { 00:18:21.642 "rw_ios_per_sec": 0, 00:18:21.642 "rw_mbytes_per_sec": 0, 00:18:21.642 "r_mbytes_per_sec": 0, 00:18:21.642 "w_mbytes_per_sec": 0 00:18:21.642 }, 00:18:21.642 "claimed": true, 00:18:21.642 "claim_type": "exclusive_write", 00:18:21.642 "zoned": false, 00:18:21.642 "supported_io_types": { 00:18:21.642 "read": true, 00:18:21.642 "write": true, 00:18:21.642 "unmap": true, 00:18:21.642 "flush": true, 00:18:21.642 "reset": true, 00:18:21.642 "nvme_admin": false, 00:18:21.642 "nvme_io": false, 00:18:21.642 "nvme_io_md": false, 00:18:21.642 "write_zeroes": true, 00:18:21.642 "zcopy": true, 00:18:21.642 "get_zone_info": false, 00:18:21.642 "zone_management": false, 00:18:21.642 "zone_append": false, 00:18:21.642 "compare": false, 00:18:21.642 "compare_and_write": false, 00:18:21.642 "abort": true, 00:18:21.642 "seek_hole": false, 00:18:21.642 "seek_data": false, 00:18:21.642 "copy": true, 00:18:21.642 "nvme_iov_md": false 00:18:21.642 }, 00:18:21.642 "memory_domains": [ 00:18:21.642 { 00:18:21.642 "dma_device_id": "system", 00:18:21.642 "dma_device_type": 1 00:18:21.642 }, 00:18:21.642 { 00:18:21.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.642 "dma_device_type": 2 00:18:21.642 } 00:18:21.642 ], 00:18:21.642 "driver_specific": {} 00:18:21.642 } 00:18:21.642 ] 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.642 18:32:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.899 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.899 "name": "Existed_Raid", 00:18:21.899 "uuid": "8d60da58-42d8-11ef-9ade-d5fc5159efa5", 00:18:21.899 "strip_size_kb": 0, 00:18:21.899 "state": "online", 00:18:21.899 "raid_level": "raid1", 00:18:21.899 "superblock": true, 00:18:21.899 "num_base_bdevs": 2, 00:18:21.899 "num_base_bdevs_discovered": 2, 00:18:21.899 "num_base_bdevs_operational": 2, 00:18:21.899 "base_bdevs_list": [ 00:18:21.899 { 00:18:21.899 "name": "BaseBdev1", 00:18:21.899 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:21.899 "is_configured": true, 00:18:21.899 "data_offset": 256, 00:18:21.899 "data_size": 7936 00:18:21.899 }, 00:18:21.899 { 00:18:21.899 "name": "BaseBdev2", 00:18:21.899 "uuid": "8de5504d-42d8-11ef-9ade-d5fc5159efa5", 00:18:21.899 "is_configured": true, 00:18:21.899 "data_offset": 256, 00:18:21.899 "data_size": 7936 00:18:21.899 } 00:18:21.899 ] 00:18:21.899 }' 00:18:21.899 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.899 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:22.157 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:22.416 [2024-07-15 18:32:14.656180] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.416 18:32:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:22.416 "name": "Existed_Raid", 00:18:22.416 "aliases": [ 00:18:22.416 "8d60da58-42d8-11ef-9ade-d5fc5159efa5" 00:18:22.416 ], 00:18:22.416 "product_name": "Raid Volume", 00:18:22.416 "block_size": 4128, 00:18:22.416 "num_blocks": 7936, 00:18:22.416 "uuid": "8d60da58-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.416 "md_size": 32, 00:18:22.416 "md_interleave": true, 00:18:22.416 "dif_type": 0, 00:18:22.416 "assigned_rate_limits": { 00:18:22.416 "rw_ios_per_sec": 0, 00:18:22.416 "rw_mbytes_per_sec": 0, 00:18:22.416 "r_mbytes_per_sec": 0, 00:18:22.416 "w_mbytes_per_sec": 0 00:18:22.416 }, 00:18:22.416 "claimed": false, 00:18:22.416 "zoned": false, 00:18:22.416 "supported_io_types": { 00:18:22.416 "read": true, 00:18:22.416 "write": true, 00:18:22.416 "unmap": false, 00:18:22.416 "flush": false, 00:18:22.416 "reset": true, 00:18:22.416 "nvme_admin": false, 00:18:22.416 "nvme_io": false, 00:18:22.416 "nvme_io_md": false, 00:18:22.416 "write_zeroes": true, 00:18:22.416 "zcopy": false, 00:18:22.416 "get_zone_info": false, 00:18:22.416 "zone_management": false, 00:18:22.416 "zone_append": false, 00:18:22.416 "compare": false, 00:18:22.416 "compare_and_write": false, 00:18:22.416 "abort": false, 00:18:22.416 "seek_hole": false, 00:18:22.416 "seek_data": false, 00:18:22.416 "copy": false, 00:18:22.416 "nvme_iov_md": false 00:18:22.416 }, 00:18:22.416 "memory_domains": [ 00:18:22.416 { 00:18:22.416 "dma_device_id": "system", 00:18:22.416 "dma_device_type": 1 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.416 "dma_device_type": 2 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "dma_device_id": "system", 00:18:22.416 "dma_device_type": 1 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.416 "dma_device_type": 2 00:18:22.416 } 00:18:22.416 ], 00:18:22.416 "driver_specific": { 00:18:22.416 "raid": { 00:18:22.416 "uuid": "8d60da58-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.416 "strip_size_kb": 0, 00:18:22.416 "state": "online", 00:18:22.416 "raid_level": "raid1", 00:18:22.416 "superblock": true, 00:18:22.416 "num_base_bdevs": 2, 00:18:22.416 "num_base_bdevs_discovered": 2, 00:18:22.416 "num_base_bdevs_operational": 2, 00:18:22.416 "base_bdevs_list": [ 00:18:22.416 { 00:18:22.416 "name": "BaseBdev1", 00:18:22.416 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.416 "is_configured": true, 00:18:22.416 "data_offset": 256, 00:18:22.416 "data_size": 7936 00:18:22.416 }, 00:18:22.416 { 00:18:22.416 "name": "BaseBdev2", 00:18:22.416 "uuid": "8de5504d-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.416 "is_configured": true, 00:18:22.416 "data_offset": 256, 00:18:22.416 "data_size": 7936 00:18:22.416 } 00:18:22.416 ] 00:18:22.416 } 00:18:22.416 } 00:18:22.416 }' 00:18:22.416 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.416 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:22.416 BaseBdev2' 00:18:22.416 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.416 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
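
The per-bdev property assertions around this point reduce to a single rpc.py call piped through jq. A minimal sketch of the same checks, with the socket path, bdev name, and expected values taken from this log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_get_bdevs -b BaseBdev1 | jq '.[]')
    [[ $(jq .block_size <<< "$info") == 4128 ]]     # 4096 data bytes + 32 interleaved md bytes
    [[ $(jq .md_size <<< "$info") == 32 ]]
    [[ $(jq .md_interleave <<< "$info") == true ]]
    [[ $(jq .dif_type <<< "$info") == 0 ]]
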
00:18:22.416 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.677 "name": "BaseBdev1", 00:18:22.677 "aliases": [ 00:18:22.677 "8c62ed56-42d8-11ef-9ade-d5fc5159efa5" 00:18:22.677 ], 00:18:22.677 "product_name": "Malloc disk", 00:18:22.677 "block_size": 4128, 00:18:22.677 "num_blocks": 8192, 00:18:22.677 "uuid": "8c62ed56-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.677 "md_size": 32, 00:18:22.677 "md_interleave": true, 00:18:22.677 "dif_type": 0, 00:18:22.677 "assigned_rate_limits": { 00:18:22.677 "rw_ios_per_sec": 0, 00:18:22.677 "rw_mbytes_per_sec": 0, 00:18:22.677 "r_mbytes_per_sec": 0, 00:18:22.677 "w_mbytes_per_sec": 0 00:18:22.677 }, 00:18:22.677 "claimed": true, 00:18:22.677 "claim_type": "exclusive_write", 00:18:22.677 "zoned": false, 00:18:22.677 "supported_io_types": { 00:18:22.677 "read": true, 00:18:22.677 "write": true, 00:18:22.677 "unmap": true, 00:18:22.677 "flush": true, 00:18:22.677 "reset": true, 00:18:22.677 "nvme_admin": false, 00:18:22.677 "nvme_io": false, 00:18:22.677 "nvme_io_md": false, 00:18:22.677 "write_zeroes": true, 00:18:22.677 "zcopy": true, 00:18:22.677 "get_zone_info": false, 00:18:22.677 "zone_management": false, 00:18:22.677 "zone_append": false, 00:18:22.677 "compare": false, 00:18:22.677 "compare_and_write": false, 00:18:22.677 "abort": true, 00:18:22.677 "seek_hole": false, 00:18:22.677 "seek_data": false, 00:18:22.677 "copy": true, 00:18:22.677 "nvme_iov_md": false 00:18:22.677 }, 00:18:22.677 "memory_domains": [ 00:18:22.677 { 00:18:22.677 "dma_device_id": "system", 00:18:22.677 "dma_device_type": 1 00:18:22.677 }, 00:18:22.677 { 00:18:22.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.677 "dma_device_type": 2 00:18:22.677 } 00:18:22.677 ], 00:18:22.677 "driver_specific": {} 00:18:22.677 }' 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.677 18:32:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:22.677 18:32:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.936 "name": "BaseBdev2", 00:18:22.936 "aliases": [ 00:18:22.936 "8de5504d-42d8-11ef-9ade-d5fc5159efa5" 00:18:22.936 ], 00:18:22.936 "product_name": "Malloc disk", 00:18:22.936 "block_size": 4128, 00:18:22.936 "num_blocks": 8192, 00:18:22.936 "uuid": "8de5504d-42d8-11ef-9ade-d5fc5159efa5", 00:18:22.936 "md_size": 32, 00:18:22.936 "md_interleave": true, 00:18:22.936 "dif_type": 0, 00:18:22.936 "assigned_rate_limits": { 00:18:22.936 "rw_ios_per_sec": 0, 00:18:22.936 "rw_mbytes_per_sec": 0, 00:18:22.936 "r_mbytes_per_sec": 0, 00:18:22.936 "w_mbytes_per_sec": 0 00:18:22.936 }, 00:18:22.936 "claimed": true, 00:18:22.936 "claim_type": "exclusive_write", 00:18:22.936 "zoned": false, 00:18:22.936 "supported_io_types": { 00:18:22.936 "read": true, 00:18:22.936 "write": true, 00:18:22.936 "unmap": true, 00:18:22.936 "flush": true, 00:18:22.936 "reset": true, 00:18:22.936 "nvme_admin": false, 00:18:22.936 "nvme_io": false, 00:18:22.936 "nvme_io_md": false, 00:18:22.936 "write_zeroes": true, 00:18:22.936 "zcopy": true, 00:18:22.936 "get_zone_info": false, 00:18:22.936 "zone_management": false, 00:18:22.936 "zone_append": false, 00:18:22.936 "compare": false, 00:18:22.936 "compare_and_write": false, 00:18:22.936 "abort": true, 00:18:22.936 "seek_hole": false, 00:18:22.936 "seek_data": false, 00:18:22.936 "copy": true, 00:18:22.936 "nvme_iov_md": false 00:18:22.936 }, 00:18:22.936 "memory_domains": [ 00:18:22.936 { 00:18:22.936 "dma_device_id": "system", 00:18:22.936 "dma_device_type": 1 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.936 "dma_device_type": 2 00:18:22.936 } 00:18:22.936 ], 00:18:22.936 "driver_specific": {} 00:18:22.936 }' 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:18:22.936 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.200 [2024-07-15 18:32:15.536230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.200 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.201 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.498 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.498 "name": "Existed_Raid", 00:18:23.498 "uuid": "8d60da58-42d8-11ef-9ade-d5fc5159efa5", 00:18:23.498 "strip_size_kb": 0, 00:18:23.498 "state": "online", 00:18:23.499 "raid_level": "raid1", 00:18:23.499 "superblock": true, 00:18:23.499 "num_base_bdevs": 2, 00:18:23.499 "num_base_bdevs_discovered": 1, 00:18:23.499 "num_base_bdevs_operational": 1, 00:18:23.499 "base_bdevs_list": [ 00:18:23.499 { 00:18:23.499 "name": null, 00:18:23.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.499 "is_configured": false, 00:18:23.499 "data_offset": 256, 00:18:23.499 "data_size": 7936 00:18:23.499 }, 00:18:23.499 { 00:18:23.499 "name": "BaseBdev2", 00:18:23.499 "uuid": "8de5504d-42d8-11ef-9ade-d5fc5159efa5", 00:18:23.499 "is_configured": true, 00:18:23.499 "data_offset": 256, 00:18:23.499 "data_size": 
7936 00:18:23.499 } 00:18:23.499 ] 00:18:23.499 }' 00:18:23.499 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.499 18:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.757 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:23.757 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:23.757 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:23.757 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.017 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:24.017 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:24.017 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:24.275 [2024-07-15 18:32:16.650454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:24.275 [2024-07-15 18:32:16.650521] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.275 [2024-07-15 18:32:16.658960] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.275 [2024-07-15 18:32:16.658981] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.275 [2024-07-15 18:32:16.658987] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1115ea434a00 name Existed_Raid, state offline 00:18:24.275 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:24.275 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:24.275 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66928 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66928 ']' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66928 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66928 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66928' 00:18:24.533 killing process with pid 66928 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66928 00:18:24.533 [2024-07-15 18:32:16.912004] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.533 [2024-07-15 18:32:16.912035] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.533 18:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66928 00:18:24.792 18:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:18:24.792 00:18:24.792 real 0m9.001s 00:18:24.792 user 0m15.628s 00:18:24.792 sys 0m1.564s 00:18:24.792 18:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:24.792 ************************************ 00:18:24.792 END TEST raid_state_function_test_sb_md_interleaved 00:18:24.792 ************************************ 00:18:24.792 18:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.792 18:32:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:24.792 18:32:17 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:24.792 18:32:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:24.792 18:32:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.792 18:32:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.792 ************************************ 00:18:24.792 START TEST raid_superblock_test_md_interleaved 00:18:24.792 ************************************ 00:18:24.792 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:24.792 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:24.792 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:24.792 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67202 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67202 /var/tmp/spdk-raid.sock 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67202 ']' 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:24.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.793 18:32:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.793 [2024-07-15 18:32:17.185839] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:18:24.793 [2024-07-15 18:32:17.186101] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:25.728 EAL: TSC is not safe to use in SMP mode 00:18:25.728 EAL: TSC is not invariant 00:18:25.728 [2024-07-15 18:32:17.790748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.728 [2024-07-15 18:32:17.901204] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:25.728 [2024-07-15 18:32:17.903341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.728 [2024-07-15 18:32:17.904147] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.728 [2024-07-15 18:32:17.904163] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.986 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:26.243 malloc1 00:18:26.243 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.501 [2024-07-15 18:32:18.689034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.501 [2024-07-15 18:32:18.689115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.501 [2024-07-15 18:32:18.689145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234780 00:18:26.501 [2024-07-15 18:32:18.689153] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.501 [2024-07-15 18:32:18.690014] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.501 [2024-07-15 18:32:18.690071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.501 pt1 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.501 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:26.759 malloc2 00:18:26.759 18:32:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.759 [2024-07-15 18:32:19.153069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.759 [2024-07-15 18:32:19.153131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.759 [2024-07-15 18:32:19.153144] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234c80 00:18:26.759 [2024-07-15 18:32:19.153153] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.759 [2024-07-15 18:32:19.153751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.759 [2024-07-15 18:32:19.153772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.759 pt2 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:27.017 [2024-07-15 18:32:19.389106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.017 [2024-07-15 18:32:19.389701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.017 [2024-07-15 18:32:19.389762] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a29f234f00 00:18:27.017 [2024-07-15 18:32:19.389769] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.017 [2024-07-15 18:32:19.389809] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a29f297e20 00:18:27.017 [2024-07-15 18:32:19.389825] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a29f234f00 00:18:27.017 [2024-07-15 18:32:19.389829] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2a29f234f00 00:18:27.017 [2024-07-15 18:32:19.389843] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.017 18:32:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.017 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.277 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.277 "name": "raid_bdev1", 00:18:27.277 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:27.277 "strip_size_kb": 0, 00:18:27.277 "state": "online", 00:18:27.277 "raid_level": "raid1", 00:18:27.277 "superblock": true, 00:18:27.277 "num_base_bdevs": 2, 00:18:27.277 "num_base_bdevs_discovered": 2, 00:18:27.277 "num_base_bdevs_operational": 2, 00:18:27.277 "base_bdevs_list": [ 00:18:27.277 { 00:18:27.277 "name": "pt1", 00:18:27.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.277 "is_configured": true, 00:18:27.277 "data_offset": 256, 00:18:27.277 "data_size": 7936 00:18:27.277 }, 00:18:27.277 { 00:18:27.277 "name": "pt2", 00:18:27.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.277 "is_configured": true, 00:18:27.277 "data_offset": 256, 00:18:27.277 "data_size": 7936 00:18:27.277 } 00:18:27.277 ] 00:18:27.277 }' 00:18:27.277 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.277 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.852 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.852 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:27.852 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:27.852 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:27.853 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:27.853 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:27.853 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.853 18:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:27.853 [2024-07-15 18:32:20.241211] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:28.112 "name": "raid_bdev1", 
00:18:28.112 "aliases": [ 00:18:28.112 "918a1a86-42d8-11ef-9ade-d5fc5159efa5" 00:18:28.112 ], 00:18:28.112 "product_name": "Raid Volume", 00:18:28.112 "block_size": 4128, 00:18:28.112 "num_blocks": 7936, 00:18:28.112 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:28.112 "md_size": 32, 00:18:28.112 "md_interleave": true, 00:18:28.112 "dif_type": 0, 00:18:28.112 "assigned_rate_limits": { 00:18:28.112 "rw_ios_per_sec": 0, 00:18:28.112 "rw_mbytes_per_sec": 0, 00:18:28.112 "r_mbytes_per_sec": 0, 00:18:28.112 "w_mbytes_per_sec": 0 00:18:28.112 }, 00:18:28.112 "claimed": false, 00:18:28.112 "zoned": false, 00:18:28.112 "supported_io_types": { 00:18:28.112 "read": true, 00:18:28.112 "write": true, 00:18:28.112 "unmap": false, 00:18:28.112 "flush": false, 00:18:28.112 "reset": true, 00:18:28.112 "nvme_admin": false, 00:18:28.112 "nvme_io": false, 00:18:28.112 "nvme_io_md": false, 00:18:28.112 "write_zeroes": true, 00:18:28.112 "zcopy": false, 00:18:28.112 "get_zone_info": false, 00:18:28.112 "zone_management": false, 00:18:28.112 "zone_append": false, 00:18:28.112 "compare": false, 00:18:28.112 "compare_and_write": false, 00:18:28.112 "abort": false, 00:18:28.112 "seek_hole": false, 00:18:28.112 "seek_data": false, 00:18:28.112 "copy": false, 00:18:28.112 "nvme_iov_md": false 00:18:28.112 }, 00:18:28.112 "memory_domains": [ 00:18:28.112 { 00:18:28.112 "dma_device_id": "system", 00:18:28.112 "dma_device_type": 1 00:18:28.112 }, 00:18:28.112 { 00:18:28.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.112 "dma_device_type": 2 00:18:28.112 }, 00:18:28.112 { 00:18:28.112 "dma_device_id": "system", 00:18:28.112 "dma_device_type": 1 00:18:28.112 }, 00:18:28.112 { 00:18:28.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.112 "dma_device_type": 2 00:18:28.112 } 00:18:28.112 ], 00:18:28.112 "driver_specific": { 00:18:28.112 "raid": { 00:18:28.112 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:28.112 "strip_size_kb": 0, 00:18:28.112 "state": "online", 00:18:28.112 "raid_level": "raid1", 00:18:28.112 "superblock": true, 00:18:28.112 "num_base_bdevs": 2, 00:18:28.112 "num_base_bdevs_discovered": 2, 00:18:28.112 "num_base_bdevs_operational": 2, 00:18:28.112 "base_bdevs_list": [ 00:18:28.112 { 00:18:28.112 "name": "pt1", 00:18:28.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.112 "is_configured": true, 00:18:28.112 "data_offset": 256, 00:18:28.112 "data_size": 7936 00:18:28.112 }, 00:18:28.112 { 00:18:28.112 "name": "pt2", 00:18:28.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.112 "is_configured": true, 00:18:28.112 "data_offset": 256, 00:18:28.112 "data_size": 7936 00:18:28.112 } 00:18:28.112 ] 00:18:28.112 } 00:18:28.112 } 00:18:28.112 }' 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:28.112 pt2' 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.112 "name": "pt1", 00:18:28.112 "aliases": [ 00:18:28.112 "00000000-0000-0000-0000-000000000001" 00:18:28.112 ], 00:18:28.112 "product_name": "passthru", 00:18:28.112 "block_size": 4128, 00:18:28.112 "num_blocks": 8192, 00:18:28.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.112 "md_size": 32, 00:18:28.112 "md_interleave": true, 00:18:28.112 "dif_type": 0, 00:18:28.112 "assigned_rate_limits": { 00:18:28.112 "rw_ios_per_sec": 0, 00:18:28.112 "rw_mbytes_per_sec": 0, 00:18:28.112 "r_mbytes_per_sec": 0, 00:18:28.112 "w_mbytes_per_sec": 0 00:18:28.112 }, 00:18:28.112 "claimed": true, 00:18:28.112 "claim_type": "exclusive_write", 00:18:28.112 "zoned": false, 00:18:28.112 "supported_io_types": { 00:18:28.112 "read": true, 00:18:28.112 "write": true, 00:18:28.112 "unmap": true, 00:18:28.112 "flush": true, 00:18:28.112 "reset": true, 00:18:28.112 "nvme_admin": false, 00:18:28.112 "nvme_io": false, 00:18:28.112 "nvme_io_md": false, 00:18:28.112 "write_zeroes": true, 00:18:28.112 "zcopy": true, 00:18:28.112 "get_zone_info": false, 00:18:28.112 "zone_management": false, 00:18:28.112 "zone_append": false, 00:18:28.112 "compare": false, 00:18:28.112 "compare_and_write": false, 00:18:28.112 "abort": true, 00:18:28.112 "seek_hole": false, 00:18:28.112 "seek_data": false, 00:18:28.112 "copy": true, 00:18:28.112 "nvme_iov_md": false 00:18:28.112 }, 00:18:28.112 "memory_domains": [ 00:18:28.112 { 00:18:28.112 "dma_device_id": "system", 00:18:28.112 "dma_device_type": 1 00:18:28.112 }, 00:18:28.112 { 00:18:28.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.112 "dma_device_type": 2 00:18:28.112 } 00:18:28.112 ], 00:18:28.112 "driver_specific": { 00:18:28.112 "passthru": { 00:18:28.112 "name": "pt1", 00:18:28.112 "base_bdev_name": "malloc1" 00:18:28.112 } 00:18:28.112 } 00:18:28.112 }' 00:18:28.112 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
00:18:28.372 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.631 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.631 "name": "pt2", 00:18:28.631 "aliases": [ 00:18:28.631 "00000000-0000-0000-0000-000000000002" 00:18:28.631 ], 00:18:28.631 "product_name": "passthru", 00:18:28.631 "block_size": 4128, 00:18:28.631 "num_blocks": 8192, 00:18:28.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.631 "md_size": 32, 00:18:28.631 "md_interleave": true, 00:18:28.631 "dif_type": 0, 00:18:28.631 "assigned_rate_limits": { 00:18:28.631 "rw_ios_per_sec": 0, 00:18:28.631 "rw_mbytes_per_sec": 0, 00:18:28.631 "r_mbytes_per_sec": 0, 00:18:28.631 "w_mbytes_per_sec": 0 00:18:28.631 }, 00:18:28.631 "claimed": true, 00:18:28.631 "claim_type": "exclusive_write", 00:18:28.631 "zoned": false, 00:18:28.631 "supported_io_types": { 00:18:28.631 "read": true, 00:18:28.631 "write": true, 00:18:28.631 "unmap": true, 00:18:28.631 "flush": true, 00:18:28.631 "reset": true, 00:18:28.631 "nvme_admin": false, 00:18:28.631 "nvme_io": false, 00:18:28.631 "nvme_io_md": false, 00:18:28.631 "write_zeroes": true, 00:18:28.631 "zcopy": true, 00:18:28.631 "get_zone_info": false, 00:18:28.631 "zone_management": false, 00:18:28.631 "zone_append": false, 00:18:28.631 "compare": false, 00:18:28.631 "compare_and_write": false, 00:18:28.631 "abort": true, 00:18:28.631 "seek_hole": false, 00:18:28.631 "seek_data": false, 00:18:28.631 "copy": true, 00:18:28.631 "nvme_iov_md": false 00:18:28.631 }, 00:18:28.631 "memory_domains": [ 00:18:28.631 { 00:18:28.631 "dma_device_id": "system", 00:18:28.631 "dma_device_type": 1 00:18:28.631 }, 00:18:28.631 { 00:18:28.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.631 "dma_device_type": 2 00:18:28.631 } 00:18:28.631 ], 00:18:28.631 "driver_specific": { 00:18:28.631 "passthru": { 00:18:28.631 "name": "pt2", 00:18:28.631 "base_bdev_name": "malloc2" 00:18:28.631 } 00:18:28.631 } 00:18:28.631 }' 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:28.632 18:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:28.632 18:32:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:28.891 [2024-07-15 18:32:21.125298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.891 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=918a1a86-42d8-11ef-9ade-d5fc5159efa5 00:18:28.891 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 918a1a86-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:28.891 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:29.149 [2024-07-15 18:32:21.417289] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.149 [2024-07-15 18:32:21.417321] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.149 [2024-07-15 18:32:21.417347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.149 [2024-07-15 18:32:21.417362] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.149 [2024-07-15 18:32:21.417366] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f234f00 name raid_bdev1, state offline 00:18:29.149 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.149 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:29.407 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:29.407 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:29.407 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.407 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:29.664 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.664 18:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:29.922 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:29.922 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:30.180 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:30.438 [2024-07-15 18:32:22.729400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:30.438 [2024-07-15 18:32:22.730061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:30.438 [2024-07-15 18:32:22.730089] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:30.438 [2024-07-15 18:32:22.730133] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:30.438 [2024-07-15 18:32:22.730146] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.438 [2024-07-15 18:32:22.730150] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f234c80 name raid_bdev1, state configuring 00:18:30.438 request: 00:18:30.438 { 00:18:30.438 "name": "raid_bdev1", 00:18:30.438 "raid_level": "raid1", 00:18:30.438 "base_bdevs": [ 00:18:30.438 "malloc1", 00:18:30.438 "malloc2" 00:18:30.438 ], 00:18:30.438 "superblock": false, 00:18:30.438 "method": "bdev_raid_create", 00:18:30.438 "req_id": 1 00:18:30.438 } 00:18:30.438 Got JSON-RPC error response 00:18:30.438 response: 00:18:30.438 { 00:18:30.438 "code": -17, 00:18:30.438 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:30.438 } 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.438 18:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:30.696 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:30.696 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:30.697 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.955 [2024-07-15 18:32:23.273435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.955 [2024-07-15 18:32:23.273508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.955 [2024-07-15 18:32:23.273535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234780 00:18:30.955 [2024-07-15 18:32:23.273544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.955 [2024-07-15 18:32:23.274192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.955 [2024-07-15 18:32:23.274216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.955 [2024-07-15 18:32:23.274248] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:30.955 [2024-07-15 18:32:23.274261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.955 pt1 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.955 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.213 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.213 "name": "raid_bdev1", 00:18:31.213 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:31.213 "strip_size_kb": 0, 00:18:31.213 "state": "configuring", 00:18:31.213 
"raid_level": "raid1", 00:18:31.213 "superblock": true, 00:18:31.213 "num_base_bdevs": 2, 00:18:31.213 "num_base_bdevs_discovered": 1, 00:18:31.213 "num_base_bdevs_operational": 2, 00:18:31.213 "base_bdevs_list": [ 00:18:31.213 { 00:18:31.213 "name": "pt1", 00:18:31.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.213 "is_configured": true, 00:18:31.213 "data_offset": 256, 00:18:31.213 "data_size": 7936 00:18:31.213 }, 00:18:31.213 { 00:18:31.213 "name": null, 00:18:31.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.213 "is_configured": false, 00:18:31.213 "data_offset": 256, 00:18:31.213 "data_size": 7936 00:18:31.213 } 00:18:31.213 ] 00:18:31.213 }' 00:18:31.213 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.213 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.780 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:31.780 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:31.780 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:31.780 18:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.780 [2024-07-15 18:32:24.153518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.780 [2024-07-15 18:32:24.153650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.780 [2024-07-15 18:32:24.153663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234f00 00:18:31.780 [2024-07-15 18:32:24.153671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.780 [2024-07-15 18:32:24.153729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.780 [2024-07-15 18:32:24.153739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.780 [2024-07-15 18:32:24.153757] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.780 [2024-07-15 18:32:24.153766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.780 [2024-07-15 18:32:24.153788] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a29f235180 00:18:31.780 [2024-07-15 18:32:24.153792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:31.780 [2024-07-15 18:32:24.153828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a29f297e20 00:18:31.780 [2024-07-15 18:32:24.153841] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a29f235180 00:18:31.780 [2024-07-15 18:32:24.153844] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2a29f235180 00:18:31.780 [2024-07-15 18:32:24.153856] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.780 pt2 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:31.780 18:32:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.780 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.039 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.039 "name": "raid_bdev1", 00:18:32.039 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:32.039 "strip_size_kb": 0, 00:18:32.039 "state": "online", 00:18:32.039 "raid_level": "raid1", 00:18:32.039 "superblock": true, 00:18:32.039 "num_base_bdevs": 2, 00:18:32.039 "num_base_bdevs_discovered": 2, 00:18:32.039 "num_base_bdevs_operational": 2, 00:18:32.039 "base_bdevs_list": [ 00:18:32.039 { 00:18:32.039 "name": "pt1", 00:18:32.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.039 "is_configured": true, 00:18:32.039 "data_offset": 256, 00:18:32.039 "data_size": 7936 00:18:32.039 }, 00:18:32.039 { 00:18:32.039 "name": "pt2", 00:18:32.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.039 "is_configured": true, 00:18:32.039 "data_offset": 256, 00:18:32.039 "data_size": 7936 00:18:32.039 } 00:18:32.039 ] 00:18:32.039 }' 00:18:32.039 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.039 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:32.606 18:32:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:32.606 18:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:32.865 [2024-07-15 18:32:25.025690] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:32.865 "name": "raid_bdev1", 00:18:32.865 "aliases": [ 00:18:32.865 "918a1a86-42d8-11ef-9ade-d5fc5159efa5" 00:18:32.865 ], 00:18:32.865 "product_name": "Raid Volume", 00:18:32.865 "block_size": 4128, 00:18:32.865 "num_blocks": 7936, 00:18:32.865 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:32.865 "md_size": 32, 00:18:32.865 "md_interleave": true, 00:18:32.865 "dif_type": 0, 00:18:32.865 "assigned_rate_limits": { 00:18:32.865 "rw_ios_per_sec": 0, 00:18:32.865 "rw_mbytes_per_sec": 0, 00:18:32.865 "r_mbytes_per_sec": 0, 00:18:32.865 "w_mbytes_per_sec": 0 00:18:32.865 }, 00:18:32.865 "claimed": false, 00:18:32.865 "zoned": false, 00:18:32.865 "supported_io_types": { 00:18:32.865 "read": true, 00:18:32.865 "write": true, 00:18:32.865 "unmap": false, 00:18:32.865 "flush": false, 00:18:32.865 "reset": true, 00:18:32.865 "nvme_admin": false, 00:18:32.865 "nvme_io": false, 00:18:32.865 "nvme_io_md": false, 00:18:32.865 "write_zeroes": true, 00:18:32.865 "zcopy": false, 00:18:32.865 "get_zone_info": false, 00:18:32.865 "zone_management": false, 00:18:32.865 "zone_append": false, 00:18:32.865 "compare": false, 00:18:32.865 "compare_and_write": false, 00:18:32.865 "abort": false, 00:18:32.865 "seek_hole": false, 00:18:32.865 "seek_data": false, 00:18:32.865 "copy": false, 00:18:32.865 "nvme_iov_md": false 00:18:32.865 }, 00:18:32.865 "memory_domains": [ 00:18:32.865 { 00:18:32.865 "dma_device_id": "system", 00:18:32.865 "dma_device_type": 1 00:18:32.865 }, 00:18:32.865 { 00:18:32.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.865 "dma_device_type": 2 00:18:32.865 }, 00:18:32.865 { 00:18:32.865 "dma_device_id": "system", 00:18:32.865 "dma_device_type": 1 00:18:32.865 }, 00:18:32.865 { 00:18:32.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.865 "dma_device_type": 2 00:18:32.865 } 00:18:32.865 ], 00:18:32.865 "driver_specific": { 00:18:32.865 "raid": { 00:18:32.865 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:32.865 "strip_size_kb": 0, 00:18:32.865 "state": "online", 00:18:32.865 "raid_level": "raid1", 00:18:32.865 "superblock": true, 00:18:32.865 "num_base_bdevs": 2, 00:18:32.865 "num_base_bdevs_discovered": 2, 00:18:32.865 "num_base_bdevs_operational": 2, 00:18:32.865 "base_bdevs_list": [ 00:18:32.865 { 00:18:32.865 "name": "pt1", 00:18:32.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.865 "is_configured": true, 00:18:32.865 "data_offset": 256, 00:18:32.865 "data_size": 7936 00:18:32.865 }, 00:18:32.865 { 00:18:32.865 "name": "pt2", 00:18:32.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.865 "is_configured": true, 00:18:32.865 "data_offset": 256, 00:18:32.865 "data_size": 7936 00:18:32.865 } 00:18:32.865 ] 00:18:32.865 } 00:18:32.865 } 00:18:32.865 }' 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:18:32.865 pt2' 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:32.865 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:33.124 "name": "pt1", 00:18:33.124 "aliases": [ 00:18:33.124 "00000000-0000-0000-0000-000000000001" 00:18:33.124 ], 00:18:33.124 "product_name": "passthru", 00:18:33.124 "block_size": 4128, 00:18:33.124 "num_blocks": 8192, 00:18:33.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.124 "md_size": 32, 00:18:33.124 "md_interleave": true, 00:18:33.124 "dif_type": 0, 00:18:33.124 "assigned_rate_limits": { 00:18:33.124 "rw_ios_per_sec": 0, 00:18:33.124 "rw_mbytes_per_sec": 0, 00:18:33.124 "r_mbytes_per_sec": 0, 00:18:33.124 "w_mbytes_per_sec": 0 00:18:33.124 }, 00:18:33.124 "claimed": true, 00:18:33.124 "claim_type": "exclusive_write", 00:18:33.124 "zoned": false, 00:18:33.124 "supported_io_types": { 00:18:33.124 "read": true, 00:18:33.124 "write": true, 00:18:33.124 "unmap": true, 00:18:33.124 "flush": true, 00:18:33.124 "reset": true, 00:18:33.124 "nvme_admin": false, 00:18:33.124 "nvme_io": false, 00:18:33.124 "nvme_io_md": false, 00:18:33.124 "write_zeroes": true, 00:18:33.124 "zcopy": true, 00:18:33.124 "get_zone_info": false, 00:18:33.124 "zone_management": false, 00:18:33.124 "zone_append": false, 00:18:33.124 "compare": false, 00:18:33.124 "compare_and_write": false, 00:18:33.124 "abort": true, 00:18:33.124 "seek_hole": false, 00:18:33.124 "seek_data": false, 00:18:33.124 "copy": true, 00:18:33.124 "nvme_iov_md": false 00:18:33.124 }, 00:18:33.124 "memory_domains": [ 00:18:33.124 { 00:18:33.124 "dma_device_id": "system", 00:18:33.124 "dma_device_type": 1 00:18:33.124 }, 00:18:33.124 { 00:18:33.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.124 "dma_device_type": 2 00:18:33.124 } 00:18:33.124 ], 00:18:33.124 "driver_specific": { 00:18:33.124 "passthru": { 00:18:33.124 "name": "pt1", 00:18:33.124 "base_bdev_name": "malloc1" 00:18:33.124 } 00:18:33.124 } 00:18:33.124 }' 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:33.124 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:33.383 "name": "pt2", 00:18:33.383 "aliases": [ 00:18:33.383 "00000000-0000-0000-0000-000000000002" 00:18:33.383 ], 00:18:33.383 "product_name": "passthru", 00:18:33.383 "block_size": 4128, 00:18:33.383 "num_blocks": 8192, 00:18:33.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.383 "md_size": 32, 00:18:33.383 "md_interleave": true, 00:18:33.383 "dif_type": 0, 00:18:33.383 "assigned_rate_limits": { 00:18:33.383 "rw_ios_per_sec": 0, 00:18:33.383 "rw_mbytes_per_sec": 0, 00:18:33.383 "r_mbytes_per_sec": 0, 00:18:33.383 "w_mbytes_per_sec": 0 00:18:33.383 }, 00:18:33.383 "claimed": true, 00:18:33.383 "claim_type": "exclusive_write", 00:18:33.383 "zoned": false, 00:18:33.383 "supported_io_types": { 00:18:33.383 "read": true, 00:18:33.383 "write": true, 00:18:33.383 "unmap": true, 00:18:33.383 "flush": true, 00:18:33.383 "reset": true, 00:18:33.383 "nvme_admin": false, 00:18:33.383 "nvme_io": false, 00:18:33.383 "nvme_io_md": false, 00:18:33.383 "write_zeroes": true, 00:18:33.383 "zcopy": true, 00:18:33.383 "get_zone_info": false, 00:18:33.383 "zone_management": false, 00:18:33.383 "zone_append": false, 00:18:33.383 "compare": false, 00:18:33.383 "compare_and_write": false, 00:18:33.383 "abort": true, 00:18:33.383 "seek_hole": false, 00:18:33.383 "seek_data": false, 00:18:33.383 "copy": true, 00:18:33.383 "nvme_iov_md": false 00:18:33.383 }, 00:18:33.383 "memory_domains": [ 00:18:33.383 { 00:18:33.383 "dma_device_id": "system", 00:18:33.383 "dma_device_type": 1 00:18:33.383 }, 00:18:33.383 { 00:18:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.383 "dma_device_type": 2 00:18:33.383 } 00:18:33.383 ], 00:18:33.383 "driver_specific": { 00:18:33.383 "passthru": { 00:18:33.383 "name": "pt2", 00:18:33.383 "base_bdev_name": "malloc2" 00:18:33.383 } 00:18:33.383 } 00:18:33.383 }' 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.383 18:32:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:33.383 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:33.643 [2024-07-15 18:32:25.969793] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.643 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 918a1a86-42d8-11ef-9ade-d5fc5159efa5 '!=' 918a1a86-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:33.643 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:33.643 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.643 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:18:33.643 18:32:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:33.901 [2024-07-15 18:32:26.253820] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.901 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.159 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.159 "name": "raid_bdev1", 00:18:34.159 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:34.159 "strip_size_kb": 0, 00:18:34.159 "state": "online", 
00:18:34.159 "raid_level": "raid1", 00:18:34.159 "superblock": true, 00:18:34.159 "num_base_bdevs": 2, 00:18:34.159 "num_base_bdevs_discovered": 1, 00:18:34.159 "num_base_bdevs_operational": 1, 00:18:34.159 "base_bdevs_list": [ 00:18:34.159 { 00:18:34.159 "name": null, 00:18:34.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.159 "is_configured": false, 00:18:34.159 "data_offset": 256, 00:18:34.159 "data_size": 7936 00:18:34.159 }, 00:18:34.159 { 00:18:34.159 "name": "pt2", 00:18:34.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.159 "is_configured": true, 00:18:34.159 "data_offset": 256, 00:18:34.159 "data_size": 7936 00:18:34.159 } 00:18:34.159 ] 00:18:34.159 }' 00:18:34.159 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.159 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.726 18:32:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:34.726 [2024-07-15 18:32:27.037888] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.726 [2024-07-15 18:32:27.037930] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.726 [2024-07-15 18:32:27.037968] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.726 [2024-07-15 18:32:27.037980] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.726 [2024-07-15 18:32:27.037984] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f235180 name raid_bdev1, state offline 00:18:34.726 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.726 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:34.984 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:34.984 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:34.984 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:34.984 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:34.984 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:18:35.264 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:35.552 [2024-07-15 18:32:27.817963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.552 [2024-07-15 18:32:27.818016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.552 [2024-07-15 18:32:27.818028] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234f00 00:18:35.552 [2024-07-15 18:32:27.818037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.552 [2024-07-15 18:32:27.818642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.552 [2024-07-15 18:32:27.818667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.552 [2024-07-15 18:32:27.818689] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:35.552 [2024-07-15 18:32:27.818701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.552 [2024-07-15 18:32:27.818719] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a29f235180 00:18:35.552 [2024-07-15 18:32:27.818723] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:35.552 [2024-07-15 18:32:27.818743] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a29f297e20 00:18:35.552 [2024-07-15 18:32:27.818756] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a29f235180 00:18:35.552 [2024-07-15 18:32:27.818760] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2a29f235180 00:18:35.552 [2024-07-15 18:32:27.818779] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.552 pt2 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.552 18:32:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.817 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.817 "name": "raid_bdev1", 00:18:35.817 "uuid": 
"918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:35.817 "strip_size_kb": 0, 00:18:35.817 "state": "online", 00:18:35.817 "raid_level": "raid1", 00:18:35.817 "superblock": true, 00:18:35.817 "num_base_bdevs": 2, 00:18:35.817 "num_base_bdevs_discovered": 1, 00:18:35.817 "num_base_bdevs_operational": 1, 00:18:35.817 "base_bdevs_list": [ 00:18:35.817 { 00:18:35.817 "name": null, 00:18:35.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.817 "is_configured": false, 00:18:35.817 "data_offset": 256, 00:18:35.817 "data_size": 7936 00:18:35.817 }, 00:18:35.817 { 00:18:35.817 "name": "pt2", 00:18:35.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.817 "is_configured": true, 00:18:35.817 "data_offset": 256, 00:18:35.817 "data_size": 7936 00:18:35.817 } 00:18:35.817 ] 00:18:35.817 }' 00:18:35.817 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.817 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.076 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:36.335 [2024-07-15 18:32:28.594041] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.335 [2024-07-15 18:32:28.594065] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.335 [2024-07-15 18:32:28.594104] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.335 [2024-07-15 18:32:28.594116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.335 [2024-07-15 18:32:28.594120] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f235180 name raid_bdev1, state offline 00:18:36.335 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.335 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:36.592 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:36.592 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:36.592 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:36.592 18:32:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.850 [2024-07-15 18:32:29.114111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.850 [2024-07-15 18:32:29.114187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.850 [2024-07-15 18:32:29.114215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a29f234c80 00:18:36.850 [2024-07-15 18:32:29.114223] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.850 [2024-07-15 18:32:29.114868] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.850 [2024-07-15 18:32:29.114892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.850 [2024-07-15 18:32:29.114915] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:36.850 [2024-07-15 18:32:29.114927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.850 [2024-07-15 18:32:29.114949] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:36.850 [2024-07-15 18:32:29.114953] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.850 [2024-07-15 18:32:29.114959] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f234780 name raid_bdev1, state configuring 00:18:36.850 [2024-07-15 18:32:29.114971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.850 [2024-07-15 18:32:29.114987] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a29f234780 00:18:36.850 [2024-07-15 18:32:29.114991] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:36.850 [2024-07-15 18:32:29.115010] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a29f297e20 00:18:36.850 [2024-07-15 18:32:29.115023] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a29f234780 00:18:36.850 [2024-07-15 18:32:29.115026] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2a29f234780 00:18:36.850 [2024-07-15 18:32:29.115036] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.850 pt1 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.850 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.109 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.109 "name": "raid_bdev1", 00:18:37.109 "uuid": "918a1a86-42d8-11ef-9ade-d5fc5159efa5", 00:18:37.109 "strip_size_kb": 0, 00:18:37.109 "state": "online", 00:18:37.109 
"raid_level": "raid1", 00:18:37.109 "superblock": true, 00:18:37.109 "num_base_bdevs": 2, 00:18:37.109 "num_base_bdevs_discovered": 1, 00:18:37.109 "num_base_bdevs_operational": 1, 00:18:37.109 "base_bdevs_list": [ 00:18:37.109 { 00:18:37.109 "name": null, 00:18:37.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.109 "is_configured": false, 00:18:37.109 "data_offset": 256, 00:18:37.109 "data_size": 7936 00:18:37.109 }, 00:18:37.109 { 00:18:37.109 "name": "pt2", 00:18:37.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.109 "is_configured": true, 00:18:37.109 "data_offset": 256, 00:18:37.109 "data_size": 7936 00:18:37.109 } 00:18:37.109 ] 00:18:37.109 }' 00:18:37.109 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.109 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.367 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:37.367 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:37.625 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:37.625 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:37.625 18:32:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.884 [2024-07-15 18:32:30.242246] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 918a1a86-42d8-11ef-9ade-d5fc5159efa5 '!=' 918a1a86-42d8-11ef-9ade-d5fc5159efa5 ']' 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67202 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67202 ']' 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67202 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67202 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:37.884 killing process with pid 67202 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67202' 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 67202 00:18:37.884 [2024-07-15 18:32:30.270700] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.884 [2024-07-15 18:32:30.270722] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.884 [2024-07-15 18:32:30.270733] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.884 [2024-07-15 18:32:30.270738] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a29f234780 name raid_bdev1, state offline 00:18:37.884 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 67202 00:18:37.884 [2024-07-15 18:32:30.285353] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.143 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:18:38.143 ************************************ 00:18:38.143 END TEST raid_superblock_test_md_interleaved 00:18:38.143 ************************************ 00:18:38.143 00:18:38.143 real 0m13.332s 00:18:38.143 user 0m23.732s 00:18:38.143 sys 0m2.117s 00:18:38.143 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.143 18:32:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.143 18:32:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:38.143 18:32:30 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:38.143 18:32:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:38.143 18:32:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.143 18:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.402 ************************************ 00:18:38.402 START TEST raid_rebuild_test_sb_md_interleaved 00:18:38.402 ************************************ 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:38.402 18:32:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67593 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67593 /var/tmp/spdk-raid.sock 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67593 ']' 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.402 18:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.402 [2024-07-15 18:32:30.566609] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:18:38.402 [2024-07-15 18:32:30.566843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:38.402 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.402 Zero copy mechanism will not be used. 00:18:38.969 EAL: TSC is not safe to use in SMP mode 00:18:38.969 EAL: TSC is not invariant 00:18:38.969 [2024-07-15 18:32:31.169976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.969 [2024-07-15 18:32:31.276692] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:38.969 [2024-07-15 18:32:31.278804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.969 [2024-07-15 18:32:31.279589] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.969 [2024-07-15 18:32:31.279600] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.296 18:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.296 18:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:39.296 18:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:39.296 18:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:39.553 BaseBdev1_malloc 00:18:39.553 18:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.809 [2024-07-15 18:32:32.147834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.809 [2024-07-15 18:32:32.147909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.809 [2024-07-15 18:32:32.148535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c34780 00:18:39.809 [2024-07-15 18:32:32.148569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.809 [2024-07-15 18:32:32.149214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.809 [2024-07-15 18:32:32.149241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.809 BaseBdev1 00:18:39.809 18:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:39.809 18:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:40.066 BaseBdev2_malloc 00:18:40.066 18:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:40.324 [2024-07-15 18:32:32.683869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:40.324 [2024-07-15 18:32:32.683925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.324 [2024-07-15 18:32:32.683953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c34c80 00:18:40.324 [2024-07-15 18:32:32.683962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.324 [2024-07-15 18:32:32.684605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.324 [2024-07-15 18:32:32.684627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:40.324 BaseBdev2 00:18:40.324 18:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:40.582 spare_malloc 
00:18:40.582 18:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:40.839 spare_delay 00:18:40.839 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:41.096 [2024-07-15 18:32:33.455925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:41.096 [2024-07-15 18:32:33.455985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.096 [2024-07-15 18:32:33.456009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c35400 00:18:41.096 [2024-07-15 18:32:33.456017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.096 [2024-07-15 18:32:33.456645] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.096 [2024-07-15 18:32:33.456670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:41.096 spare 00:18:41.096 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:41.353 [2024-07-15 18:32:33.687967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.353 [2024-07-15 18:32:33.688608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.353 [2024-07-15 18:32:33.688669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x50eb4c35680 00:18:41.353 [2024-07-15 18:32:33.688675] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:41.353 [2024-07-15 18:32:33.688707] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97e20 00:18:41.353 [2024-07-15 18:32:33.688722] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x50eb4c35680 00:18:41.353 [2024-07-15 18:32:33.688725] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x50eb4c35680 00:18:41.353 [2024-07-15 18:32:33.688738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.353 18:32:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.353 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.610 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.610 "name": "raid_bdev1", 00:18:41.610 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:41.610 "strip_size_kb": 0, 00:18:41.610 "state": "online", 00:18:41.610 "raid_level": "raid1", 00:18:41.610 "superblock": true, 00:18:41.610 "num_base_bdevs": 2, 00:18:41.610 "num_base_bdevs_discovered": 2, 00:18:41.610 "num_base_bdevs_operational": 2, 00:18:41.610 "base_bdevs_list": [ 00:18:41.610 { 00:18:41.610 "name": "BaseBdev1", 00:18:41.610 "uuid": "1389b234-03f0-cd55-98e7-e5e76158c644", 00:18:41.610 "is_configured": true, 00:18:41.610 "data_offset": 256, 00:18:41.610 "data_size": 7936 00:18:41.610 }, 00:18:41.610 { 00:18:41.610 "name": "BaseBdev2", 00:18:41.610 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:41.610 "is_configured": true, 00:18:41.610 "data_offset": 256, 00:18:41.610 "data_size": 7936 00:18:41.610 } 00:18:41.610 ] 00:18:41.610 }' 00:18:41.610 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.610 18:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.173 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:42.173 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:42.431 [2024-07-15 18:32:34.608071] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.431 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:18:42.431 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.431 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:42.689 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:18:42.689 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:42.689 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:18:42.689 18:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:42.947 [2024-07-15 18:32:35.116054] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.947 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.204 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.204 "name": "raid_bdev1", 00:18:43.204 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:43.204 "strip_size_kb": 0, 00:18:43.204 "state": "online", 00:18:43.204 "raid_level": "raid1", 00:18:43.204 "superblock": true, 00:18:43.204 "num_base_bdevs": 2, 00:18:43.204 "num_base_bdevs_discovered": 1, 00:18:43.204 "num_base_bdevs_operational": 1, 00:18:43.204 "base_bdevs_list": [ 00:18:43.204 { 00:18:43.204 "name": null, 00:18:43.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.204 "is_configured": false, 00:18:43.204 "data_offset": 256, 00:18:43.204 "data_size": 7936 00:18:43.204 }, 00:18:43.204 { 00:18:43.204 "name": "BaseBdev2", 00:18:43.204 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:43.204 "is_configured": true, 00:18:43.204 "data_offset": 256, 00:18:43.204 "data_size": 7936 00:18:43.204 } 00:18:43.204 ] 00:18:43.204 }' 00:18:43.204 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.204 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.462 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.719 [2024-07-15 18:32:35.944146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.719 [2024-07-15 18:32:35.944423] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97ec0 00:18:43.719 [2024-07-15 18:32:35.945334] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.719 18:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.652 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.909 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.909 "name": "raid_bdev1", 00:18:44.909 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:44.909 "strip_size_kb": 0, 00:18:44.909 "state": "online", 00:18:44.909 "raid_level": "raid1", 00:18:44.909 "superblock": true, 00:18:44.909 "num_base_bdevs": 2, 00:18:44.909 "num_base_bdevs_discovered": 2, 00:18:44.909 "num_base_bdevs_operational": 2, 00:18:44.909 "process": { 00:18:44.909 "type": "rebuild", 00:18:44.909 "target": "spare", 00:18:44.909 "progress": { 00:18:44.909 "blocks": 3328, 00:18:44.909 "percent": 41 00:18:44.909 } 00:18:44.909 }, 00:18:44.909 "base_bdevs_list": [ 00:18:44.909 { 00:18:44.909 "name": "spare", 00:18:44.909 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:44.909 "is_configured": true, 00:18:44.909 "data_offset": 256, 00:18:44.909 "data_size": 7936 00:18:44.909 }, 00:18:44.909 { 00:18:44.909 "name": "BaseBdev2", 00:18:44.909 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:44.909 "is_configured": true, 00:18:44.909 "data_offset": 256, 00:18:44.909 "data_size": 7936 00:18:44.909 } 00:18:44.910 ] 00:18:44.910 }' 00:18:44.910 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:44.910 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.910 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:44.910 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.910 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:45.167 [2024-07-15 18:32:37.566122] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.425 [2024-07-15 18:32:37.656073] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:45.425 [2024-07-15 18:32:37.656177] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.425 [2024-07-15 18:32:37.656199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.425 [2024-07-15 18:32:37.656204] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:45.425 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.425 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:18:45.425 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.426 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.683 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.683 "name": "raid_bdev1", 00:18:45.683 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:45.683 "strip_size_kb": 0, 00:18:45.683 "state": "online", 00:18:45.683 "raid_level": "raid1", 00:18:45.684 "superblock": true, 00:18:45.684 "num_base_bdevs": 2, 00:18:45.684 "num_base_bdevs_discovered": 1, 00:18:45.684 "num_base_bdevs_operational": 1, 00:18:45.684 "base_bdevs_list": [ 00:18:45.684 { 00:18:45.684 "name": null, 00:18:45.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.684 "is_configured": false, 00:18:45.684 "data_offset": 256, 00:18:45.684 "data_size": 7936 00:18:45.684 }, 00:18:45.684 { 00:18:45.684 "name": "BaseBdev2", 00:18:45.684 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:45.684 "is_configured": true, 00:18:45.684 "data_offset": 256, 00:18:45.684 "data_size": 7936 00:18:45.684 } 00:18:45.684 ] 00:18:45.684 }' 00:18:45.684 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.684 18:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.942 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.200 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.200 "name": "raid_bdev1", 00:18:46.200 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:46.200 "strip_size_kb": 0, 00:18:46.200 "state": "online", 00:18:46.200 "raid_level": "raid1", 00:18:46.200 "superblock": true, 00:18:46.200 "num_base_bdevs": 2, 00:18:46.200 "num_base_bdevs_discovered": 1, 00:18:46.200 "num_base_bdevs_operational": 1, 00:18:46.200 "base_bdevs_list": [ 00:18:46.200 { 00:18:46.200 "name": null, 00:18:46.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.200 "is_configured": false, 00:18:46.200 "data_offset": 256, 00:18:46.200 "data_size": 7936 00:18:46.200 }, 00:18:46.200 { 00:18:46.200 "name": "BaseBdev2", 00:18:46.200 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:46.200 "is_configured": true, 00:18:46.200 "data_offset": 256, 00:18:46.200 "data_size": 7936 00:18:46.200 } 00:18:46.200 ] 00:18:46.200 }' 00:18:46.200 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:46.459 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:46.459 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:46.459 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:46.459 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:46.459 [2024-07-15 18:32:38.840066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.459 [2024-07-15 18:32:38.840346] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97e20 00:18:46.459 [2024-07-15 18:32:38.841308] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.459 18:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.834 18:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.834 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.834 "name": "raid_bdev1", 00:18:47.834 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:47.834 "strip_size_kb": 0, 00:18:47.834 "state": "online", 00:18:47.834 "raid_level": "raid1", 00:18:47.834 "superblock": true, 00:18:47.834 "num_base_bdevs": 2, 00:18:47.834 "num_base_bdevs_discovered": 2, 00:18:47.834 
"num_base_bdevs_operational": 2, 00:18:47.834 "process": { 00:18:47.834 "type": "rebuild", 00:18:47.834 "target": "spare", 00:18:47.834 "progress": { 00:18:47.834 "blocks": 3328, 00:18:47.835 "percent": 41 00:18:47.835 } 00:18:47.835 }, 00:18:47.835 "base_bdevs_list": [ 00:18:47.835 { 00:18:47.835 "name": "spare", 00:18:47.835 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:47.835 "is_configured": true, 00:18:47.835 "data_offset": 256, 00:18:47.835 "data_size": 7936 00:18:47.835 }, 00:18:47.835 { 00:18:47.835 "name": "BaseBdev2", 00:18:47.835 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:47.835 "is_configured": true, 00:18:47.835 "data_offset": 256, 00:18:47.835 "data_size": 7936 00:18:47.835 } 00:18:47.835 ] 00:18:47.835 }' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:18:47.835 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=732 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.835 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.092 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.092 "name": "raid_bdev1", 00:18:48.092 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:48.092 "strip_size_kb": 0, 00:18:48.092 "state": "online", 00:18:48.092 "raid_level": "raid1", 00:18:48.092 "superblock": true, 00:18:48.092 
"num_base_bdevs": 2, 00:18:48.092 "num_base_bdevs_discovered": 2, 00:18:48.092 "num_base_bdevs_operational": 2, 00:18:48.092 "process": { 00:18:48.093 "type": "rebuild", 00:18:48.093 "target": "spare", 00:18:48.093 "progress": { 00:18:48.093 "blocks": 3840, 00:18:48.093 "percent": 48 00:18:48.093 } 00:18:48.093 }, 00:18:48.093 "base_bdevs_list": [ 00:18:48.093 { 00:18:48.093 "name": "spare", 00:18:48.093 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:48.093 "is_configured": true, 00:18:48.093 "data_offset": 256, 00:18:48.093 "data_size": 7936 00:18:48.093 }, 00:18:48.093 { 00:18:48.093 "name": "BaseBdev2", 00:18:48.093 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:48.093 "is_configured": true, 00:18:48.093 "data_offset": 256, 00:18:48.093 "data_size": 7936 00:18:48.093 } 00:18:48.093 ] 00:18:48.093 }' 00:18:48.093 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.093 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.093 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.093 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.093 18:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.467 "name": "raid_bdev1", 00:18:49.467 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:49.467 "strip_size_kb": 0, 00:18:49.467 "state": "online", 00:18:49.467 "raid_level": "raid1", 00:18:49.467 "superblock": true, 00:18:49.467 "num_base_bdevs": 2, 00:18:49.467 "num_base_bdevs_discovered": 2, 00:18:49.467 "num_base_bdevs_operational": 2, 00:18:49.467 "process": { 00:18:49.467 "type": "rebuild", 00:18:49.467 "target": "spare", 00:18:49.467 "progress": { 00:18:49.467 "blocks": 7424, 00:18:49.467 "percent": 93 00:18:49.467 } 00:18:49.467 }, 00:18:49.467 "base_bdevs_list": [ 00:18:49.467 { 00:18:49.467 "name": "spare", 00:18:49.467 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:49.467 "is_configured": true, 00:18:49.467 "data_offset": 256, 00:18:49.467 "data_size": 7936 00:18:49.467 }, 00:18:49.467 { 00:18:49.467 "name": "BaseBdev2", 00:18:49.467 "uuid": 
"70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:49.467 "is_configured": true, 00:18:49.467 "data_offset": 256, 00:18:49.467 "data_size": 7936 00:18:49.467 } 00:18:49.467 ] 00:18:49.467 }' 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.467 18:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:49.726 [2024-07-15 18:32:41.961189] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:49.726 [2024-07-15 18:32:41.961237] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:49.726 [2024-07-15 18:32:41.961296] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.663 18:32:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.922 "name": "raid_bdev1", 00:18:50.922 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:50.922 "strip_size_kb": 0, 00:18:50.922 "state": "online", 00:18:50.922 "raid_level": "raid1", 00:18:50.922 "superblock": true, 00:18:50.922 "num_base_bdevs": 2, 00:18:50.922 "num_base_bdevs_discovered": 2, 00:18:50.922 "num_base_bdevs_operational": 2, 00:18:50.922 "base_bdevs_list": [ 00:18:50.922 { 00:18:50.922 "name": "spare", 00:18:50.922 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:50.922 "is_configured": true, 00:18:50.922 "data_offset": 256, 00:18:50.922 "data_size": 7936 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "name": "BaseBdev2", 00:18:50.922 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:50.922 "is_configured": true, 00:18:50.922 "data_offset": 256, 00:18:50.922 "data_size": 7936 00:18:50.922 } 00:18:50.922 ] 00:18:50.922 }' 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:50.922 18:32:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.922 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.182 "name": "raid_bdev1", 00:18:51.182 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:51.182 "strip_size_kb": 0, 00:18:51.182 "state": "online", 00:18:51.182 "raid_level": "raid1", 00:18:51.182 "superblock": true, 00:18:51.182 "num_base_bdevs": 2, 00:18:51.182 "num_base_bdevs_discovered": 2, 00:18:51.182 "num_base_bdevs_operational": 2, 00:18:51.182 "base_bdevs_list": [ 00:18:51.182 { 00:18:51.182 "name": "spare", 00:18:51.182 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:51.182 "is_configured": true, 00:18:51.182 "data_offset": 256, 00:18:51.182 "data_size": 7936 00:18:51.182 }, 00:18:51.182 { 00:18:51.182 "name": "BaseBdev2", 00:18:51.182 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:51.182 "is_configured": true, 00:18:51.182 "data_offset": 256, 00:18:51.182 "data_size": 7936 00:18:51.182 } 00:18:51.182 ] 00:18:51.182 }' 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.182 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.441 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.441 "name": "raid_bdev1", 00:18:51.441 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:51.441 "strip_size_kb": 0, 00:18:51.441 "state": "online", 00:18:51.441 "raid_level": "raid1", 00:18:51.441 "superblock": true, 00:18:51.441 "num_base_bdevs": 2, 00:18:51.441 "num_base_bdevs_discovered": 2, 00:18:51.441 "num_base_bdevs_operational": 2, 00:18:51.441 "base_bdevs_list": [ 00:18:51.441 { 00:18:51.441 "name": "spare", 00:18:51.441 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:51.441 "is_configured": true, 00:18:51.441 "data_offset": 256, 00:18:51.441 "data_size": 7936 00:18:51.441 }, 00:18:51.441 { 00:18:51.441 "name": "BaseBdev2", 00:18:51.441 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:51.441 "is_configured": true, 00:18:51.441 "data_offset": 256, 00:18:51.441 "data_size": 7936 00:18:51.441 } 00:18:51.441 ] 00:18:51.441 }' 00:18:51.441 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.441 18:32:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.007 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.007 [2024-07-15 18:32:44.373445] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.007 [2024-07-15 18:32:44.373468] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.007 [2024-07-15 18:32:44.373490] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.007 [2024-07-15 18:32:44.373505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.007 [2024-07-15 18:32:44.373509] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x50eb4c35680 name raid_bdev1, state offline 00:18:52.007 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.007 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:18:52.574 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:52.574 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:18:52.574 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:18:52.574 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:52.574 18:32:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:52.832 [2024-07-15 18:32:45.161504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.832 [2024-07-15 18:32:45.161556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.832 [2024-07-15 18:32:45.161585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c35400 00:18:52.832 [2024-07-15 18:32:45.161594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.832 [2024-07-15 18:32:45.162204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.832 [2024-07-15 18:32:45.162232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.832 [2024-07-15 18:32:45.162252] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:52.832 [2024-07-15 18:32:45.162265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.832 [2024-07-15 18:32:45.162300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.832 spare 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.832 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.091 [2024-07-15 18:32:45.262327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x50eb4c35680 00:18:53.091 [2024-07-15 18:32:45.262350] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:53.091 [2024-07-15 18:32:45.262405] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97e20 00:18:53.091 [2024-07-15 18:32:45.262435] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x50eb4c35680 00:18:53.091 [2024-07-15 18:32:45.262439] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x50eb4c35680 00:18:53.091 [2024-07-15 18:32:45.262455] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.091 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.091 "name": "raid_bdev1", 00:18:53.091 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:53.091 "strip_size_kb": 0, 00:18:53.091 "state": "online", 00:18:53.091 "raid_level": "raid1", 00:18:53.091 "superblock": true, 00:18:53.091 "num_base_bdevs": 2, 00:18:53.091 "num_base_bdevs_discovered": 2, 00:18:53.091 "num_base_bdevs_operational": 2, 00:18:53.091 "base_bdevs_list": [ 00:18:53.091 { 00:18:53.091 "name": "spare", 00:18:53.091 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:53.091 "is_configured": true, 00:18:53.091 "data_offset": 256, 00:18:53.091 "data_size": 7936 00:18:53.091 }, 00:18:53.091 { 00:18:53.091 "name": "BaseBdev2", 00:18:53.091 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:53.091 "is_configured": true, 00:18:53.091 "data_offset": 256, 00:18:53.091 "data_size": 7936 00:18:53.091 } 00:18:53.091 ] 00:18:53.091 }' 00:18:53.091 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.091 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.350 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.609 "name": "raid_bdev1", 00:18:53.609 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:53.609 "strip_size_kb": 0, 00:18:53.609 "state": "online", 00:18:53.609 "raid_level": "raid1", 00:18:53.609 "superblock": true, 00:18:53.609 "num_base_bdevs": 2, 00:18:53.609 "num_base_bdevs_discovered": 2, 00:18:53.609 "num_base_bdevs_operational": 2, 00:18:53.609 "base_bdevs_list": [ 00:18:53.609 { 00:18:53.609 "name": "spare", 00:18:53.609 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:53.609 "is_configured": true, 00:18:53.609 "data_offset": 256, 00:18:53.609 "data_size": 7936 00:18:53.609 }, 00:18:53.609 { 00:18:53.609 "name": "BaseBdev2", 00:18:53.609 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:53.609 "is_configured": true, 00:18:53.609 "data_offset": 256, 00:18:53.609 "data_size": 7936 00:18:53.609 } 00:18:53.609 ] 00:18:53.609 }' 00:18:53.609 18:32:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.609 18:32:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:53.869 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.869 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:54.149 [2024-07-15 18:32:46.473666] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.149 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.408 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.408 "name": "raid_bdev1", 00:18:54.408 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:54.408 "strip_size_kb": 0, 00:18:54.408 "state": "online", 00:18:54.408 "raid_level": "raid1", 00:18:54.408 "superblock": true, 00:18:54.408 "num_base_bdevs": 2, 00:18:54.408 "num_base_bdevs_discovered": 1, 00:18:54.408 "num_base_bdevs_operational": 1, 00:18:54.408 "base_bdevs_list": [ 00:18:54.408 { 00:18:54.408 "name": null, 00:18:54.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.408 "is_configured": false, 00:18:54.408 "data_offset": 256, 00:18:54.408 "data_size": 7936 00:18:54.408 }, 
00:18:54.408 { 00:18:54.408 "name": "BaseBdev2", 00:18:54.408 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:54.408 "is_configured": true, 00:18:54.408 "data_offset": 256, 00:18:54.408 "data_size": 7936 00:18:54.408 } 00:18:54.408 ] 00:18:54.408 }' 00:18:54.408 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.408 18:32:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.977 18:32:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.236 [2024-07-15 18:32:47.393757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.236 [2024-07-15 18:32:47.393828] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.236 [2024-07-15 18:32:47.393834] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:55.236 [2024-07-15 18:32:47.393870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.236 [2024-07-15 18:32:47.394082] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97ec0 00:18:55.236 [2024-07-15 18:32:47.394732] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.236 18:32:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.172 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.431 "name": "raid_bdev1", 00:18:56.431 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:56.431 "strip_size_kb": 0, 00:18:56.431 "state": "online", 00:18:56.431 "raid_level": "raid1", 00:18:56.431 "superblock": true, 00:18:56.431 "num_base_bdevs": 2, 00:18:56.431 "num_base_bdevs_discovered": 2, 00:18:56.431 "num_base_bdevs_operational": 2, 00:18:56.431 "process": { 00:18:56.431 "type": "rebuild", 00:18:56.431 "target": "spare", 00:18:56.431 "progress": { 00:18:56.431 "blocks": 3328, 00:18:56.431 "percent": 41 00:18:56.431 } 00:18:56.431 }, 00:18:56.431 "base_bdevs_list": [ 00:18:56.431 { 00:18:56.431 "name": "spare", 00:18:56.431 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:56.431 "is_configured": true, 00:18:56.431 "data_offset": 256, 00:18:56.431 "data_size": 7936 00:18:56.431 }, 00:18:56.431 { 00:18:56.431 
"name": "BaseBdev2", 00:18:56.431 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:56.431 "is_configured": true, 00:18:56.431 "data_offset": 256, 00:18:56.431 "data_size": 7936 00:18:56.431 } 00:18:56.431 ] 00:18:56.431 }' 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.431 18:32:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:56.689 [2024-07-15 18:32:49.018825] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.948 [2024-07-15 18:32:49.105286] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:56.948 [2024-07-15 18:32:49.105360] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.948 [2024-07-15 18:32:49.105383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.948 [2024-07-15 18:32:49.105387] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:56.948 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.948 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.948 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.948 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.949 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.207 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.207 "name": "raid_bdev1", 00:18:57.207 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:57.207 "strip_size_kb": 0, 00:18:57.207 "state": "online", 00:18:57.207 "raid_level": "raid1", 00:18:57.207 "superblock": true, 00:18:57.207 
"num_base_bdevs": 2, 00:18:57.207 "num_base_bdevs_discovered": 1, 00:18:57.207 "num_base_bdevs_operational": 1, 00:18:57.207 "base_bdevs_list": [ 00:18:57.207 { 00:18:57.207 "name": null, 00:18:57.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.207 "is_configured": false, 00:18:57.207 "data_offset": 256, 00:18:57.207 "data_size": 7936 00:18:57.207 }, 00:18:57.207 { 00:18:57.207 "name": "BaseBdev2", 00:18:57.207 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:57.207 "is_configured": true, 00:18:57.207 "data_offset": 256, 00:18:57.207 "data_size": 7936 00:18:57.207 } 00:18:57.207 ] 00:18:57.207 }' 00:18:57.207 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.207 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.466 18:32:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:57.776 [2024-07-15 18:32:50.057480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.776 [2024-07-15 18:32:50.057535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.776 [2024-07-15 18:32:50.057564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c35400 00:18:57.776 [2024-07-15 18:32:50.057572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.776 [2024-07-15 18:32:50.057642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.776 [2024-07-15 18:32:50.057652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.776 [2024-07-15 18:32:50.057671] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.776 [2024-07-15 18:32:50.057676] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.776 [2024-07-15 18:32:50.057680] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:57.776 [2024-07-15 18:32:50.057692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.776 [2024-07-15 18:32:50.057903] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50eb4c97e20 00:18:57.776 [2024-07-15 18:32:50.058551] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.776 spare 00:18:57.776 18:32:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.154 "name": "raid_bdev1", 00:18:59.154 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:59.154 "strip_size_kb": 0, 00:18:59.154 "state": "online", 00:18:59.154 "raid_level": "raid1", 00:18:59.154 "superblock": true, 00:18:59.154 "num_base_bdevs": 2, 00:18:59.154 "num_base_bdevs_discovered": 2, 00:18:59.154 "num_base_bdevs_operational": 2, 00:18:59.154 "process": { 00:18:59.154 "type": "rebuild", 00:18:59.154 "target": "spare", 00:18:59.154 "progress": { 00:18:59.154 "blocks": 3328, 00:18:59.154 "percent": 41 00:18:59.154 } 00:18:59.154 }, 00:18:59.154 "base_bdevs_list": [ 00:18:59.154 { 00:18:59.154 "name": "spare", 00:18:59.154 "uuid": "c9b8dbeb-2dc9-cd5e-924d-f0b04fdc1f89", 00:18:59.154 "is_configured": true, 00:18:59.154 "data_offset": 256, 00:18:59.154 "data_size": 7936 00:18:59.154 }, 00:18:59.154 { 00:18:59.154 "name": "BaseBdev2", 00:18:59.154 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:59.154 "is_configured": true, 00:18:59.154 "data_offset": 256, 00:18:59.154 "data_size": 7936 00:18:59.154 } 00:18:59.154 ] 00:18:59.154 }' 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.154 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:59.413 [2024-07-15 18:32:51.742214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.413 [2024-07-15 18:32:51.769284] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:59.413 [2024-07-15 18:32:51.769357] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.413 [2024-07-15 18:32:51.769379] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.413 [2024-07-15 18:32:51.769382] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.413 18:32:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.980 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.980 "name": "raid_bdev1", 00:18:59.980 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:18:59.980 "strip_size_kb": 0, 00:18:59.980 "state": "online", 00:18:59.980 "raid_level": "raid1", 00:18:59.980 "superblock": true, 00:18:59.980 "num_base_bdevs": 2, 00:18:59.980 "num_base_bdevs_discovered": 1, 00:18:59.980 "num_base_bdevs_operational": 1, 00:18:59.980 "base_bdevs_list": [ 00:18:59.980 { 00:18:59.980 "name": null, 00:18:59.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.980 "is_configured": false, 00:18:59.980 "data_offset": 256, 00:18:59.980 "data_size": 7936 00:18:59.980 }, 00:18:59.980 { 00:18:59.980 "name": "BaseBdev2", 00:18:59.980 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:18:59.980 "is_configured": true, 00:18:59.980 "data_offset": 256, 00:18:59.980 "data_size": 7936 00:18:59.980 } 00:18:59.980 ] 00:18:59.981 }' 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.981 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.548 "name": "raid_bdev1", 00:19:00.548 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:19:00.548 "strip_size_kb": 0, 00:19:00.548 "state": "online", 00:19:00.548 "raid_level": "raid1", 00:19:00.548 "superblock": true, 00:19:00.548 "num_base_bdevs": 2, 00:19:00.548 "num_base_bdevs_discovered": 1, 00:19:00.548 "num_base_bdevs_operational": 1, 00:19:00.548 "base_bdevs_list": [ 00:19:00.548 { 00:19:00.548 "name": null, 00:19:00.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.548 "is_configured": false, 00:19:00.548 "data_offset": 256, 00:19:00.548 "data_size": 7936 00:19:00.548 }, 00:19:00.548 { 00:19:00.548 "name": "BaseBdev2", 00:19:00.548 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:19:00.548 "is_configured": true, 00:19:00.548 "data_offset": 256, 00:19:00.548 "data_size": 7936 00:19:00.548 } 00:19:00.548 ] 00:19:00.548 }' 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:00.548 18:32:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:00.807 [2024-07-15 18:32:53.209433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:00.807 [2024-07-15 18:32:53.209491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.807 [2024-07-15 18:32:53.209519] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50eb4c34780 00:19:00.807 [2024-07-15 18:32:53.209527] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.807 [2024-07-15 18:32:53.209585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.807 [2024-07-15 18:32:53.209595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:00.807 [2024-07-15 18:32:53.209614] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:00.807 [2024-07-15 18:32:53.209620] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.807 [2024-07-15 18:32:53.209623] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.075 BaseBdev1 00:19:01.075 18:32:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.010 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.268 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.268 "name": "raid_bdev1", 00:19:02.268 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:19:02.268 "strip_size_kb": 0, 00:19:02.268 "state": "online", 00:19:02.268 "raid_level": "raid1", 00:19:02.268 "superblock": true, 00:19:02.268 "num_base_bdevs": 2, 00:19:02.268 "num_base_bdevs_discovered": 1, 00:19:02.268 "num_base_bdevs_operational": 1, 00:19:02.268 "base_bdevs_list": [ 00:19:02.268 { 00:19:02.268 "name": null, 00:19:02.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.268 "is_configured": false, 00:19:02.268 "data_offset": 256, 00:19:02.268 "data_size": 7936 00:19:02.268 }, 00:19:02.268 { 00:19:02.268 "name": "BaseBdev2", 00:19:02.268 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:19:02.268 "is_configured": true, 00:19:02.268 "data_offset": 256, 00:19:02.268 "data_size": 7936 00:19:02.268 } 00:19:02.268 ] 00:19:02.268 }' 00:19:02.268 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.268 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.834 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.834 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.835 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:02.835 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:02.835 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.835 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.835 18:32:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.835 "name": "raid_bdev1", 00:19:02.835 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:19:02.835 "strip_size_kb": 0, 00:19:02.835 "state": "online", 00:19:02.835 "raid_level": "raid1", 00:19:02.835 "superblock": true, 00:19:02.835 "num_base_bdevs": 2, 00:19:02.835 "num_base_bdevs_discovered": 1, 00:19:02.835 "num_base_bdevs_operational": 1, 00:19:02.835 "base_bdevs_list": [ 00:19:02.835 { 00:19:02.835 "name": null, 00:19:02.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.835 "is_configured": false, 00:19:02.835 "data_offset": 256, 00:19:02.835 "data_size": 7936 00:19:02.835 }, 00:19:02.835 { 00:19:02.835 "name": "BaseBdev2", 00:19:02.835 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:19:02.835 "is_configured": true, 00:19:02.835 "data_offset": 256, 00:19:02.835 "data_size": 7936 00:19:02.835 } 00:19:02.835 ] 00:19:02.835 }' 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:02.835 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.400 [2024-07-15 18:32:55.505710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.400 [2024-07-15 18:32:55.505790] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.400 [2024-07-15 18:32:55.505797] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:03.400 request: 00:19:03.400 { 00:19:03.400 "base_bdev": "BaseBdev1", 00:19:03.400 "raid_bdev": "raid_bdev1", 00:19:03.400 "method": "bdev_raid_add_base_bdev", 00:19:03.400 "req_id": 1 00:19:03.400 } 00:19:03.400 Got JSON-RPC error response 00:19:03.400 response: 00:19:03.400 { 00:19:03.400 "code": -22, 00:19:03.400 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:03.400 } 00:19:03.400 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:19:03.400 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.400 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.400 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.400 18:32:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.334 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:04.592 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.592 "name": "raid_bdev1", 00:19:04.592 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:19:04.592 "strip_size_kb": 0, 00:19:04.592 "state": "online", 00:19:04.592 "raid_level": "raid1", 00:19:04.592 "superblock": true, 00:19:04.592 "num_base_bdevs": 2, 00:19:04.592 "num_base_bdevs_discovered": 1, 00:19:04.592 "num_base_bdevs_operational": 1, 00:19:04.592 "base_bdevs_list": [ 00:19:04.592 { 00:19:04.592 "name": null, 00:19:04.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.592 "is_configured": false, 00:19:04.592 "data_offset": 256, 00:19:04.592 "data_size": 7936 00:19:04.592 }, 00:19:04.592 { 00:19:04.592 "name": "BaseBdev2", 00:19:04.592 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:19:04.592 "is_configured": true, 00:19:04.592 "data_offset": 256, 00:19:04.592 "data_size": 7936 00:19:04.592 } 00:19:04.592 ] 00:19:04.592 }' 00:19:04.592 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.592 18:32:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.158 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.417 "name": "raid_bdev1", 00:19:05.417 "uuid": "9a0fefc8-42d8-11ef-9ade-d5fc5159efa5", 00:19:05.417 "strip_size_kb": 0, 00:19:05.417 "state": "online", 00:19:05.417 "raid_level": "raid1", 00:19:05.417 "superblock": true, 00:19:05.417 "num_base_bdevs": 2, 00:19:05.417 "num_base_bdevs_discovered": 1, 00:19:05.417 "num_base_bdevs_operational": 1, 00:19:05.417 "base_bdevs_list": [ 00:19:05.417 { 00:19:05.417 "name": null, 00:19:05.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.417 "is_configured": false, 00:19:05.417 "data_offset": 256, 00:19:05.417 "data_size": 7936 00:19:05.417 }, 00:19:05.417 { 00:19:05.417 "name": "BaseBdev2", 00:19:05.417 "uuid": "70a2c16c-bf21-465d-a2f9-7f276d3c69fc", 00:19:05.417 "is_configured": true, 00:19:05.417 "data_offset": 256, 00:19:05.417 "data_size": 7936 00:19:05.417 } 00:19:05.417 ] 00:19:05.417 }' 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67593 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67593 ']' 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67593 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67593 00:19:05.417 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:19:05.418 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:05.418 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:05.418 killing process with pid 67593 00:19:05.418 Received shutdown signal, test time was about 60.000000 seconds 00:19:05.418 00:19:05.418 Latency(us) 00:19:05.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.418 =================================================================================================================== 00:19:05.418 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.418 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67593' 00:19:05.418 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67593 00:19:05.418 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67593 00:19:05.418 [2024-07-15 18:32:57.603774] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.418 [2024-07-15 18:32:57.603807] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.418 [2024-07-15 18:32:57.603820] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.418 [2024-07-15 18:32:57.603824] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x50eb4c35680 name raid_bdev1, state offline 00:19:05.418 [2024-07-15 18:32:57.626180] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.676 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:19:05.676 00:19:05.676 real 0m27.291s 00:19:05.676 user 0m42.350s 00:19:05.676 sys 0m2.714s 00:19:05.676 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.676 18:32:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.676 ************************************ 00:19:05.676 END TEST raid_rebuild_test_sb_md_interleaved 00:19:05.676 ************************************ 00:19:05.676 18:32:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:05.676 18:32:57 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:19:05.676 18:32:57 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:19:05.676 18:32:57 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 67593 ']' 00:19:05.676 18:32:57 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67593 00:19:05.676 18:32:57 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:19:05.676 00:19:05.676 real 11m59.457s 00:19:05.676 user 20m53.858s 00:19:05.676 sys 1m51.922s 00:19:05.676 18:32:57 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.676 18:32:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.676 ************************************ 00:19:05.676 END TEST bdev_raid 00:19:05.676 ************************************ 00:19:05.676 18:32:57 -- common/autotest_common.sh@1142 -- # return 0 00:19:05.676 18:32:57 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:19:05.676 18:32:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:05.676 18:32:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.676 18:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:05.676 ************************************ 00:19:05.676 START TEST bdevperf_config 00:19:05.676 ************************************ 00:19:05.676 18:32:57 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:19:05.936 * Looking for test storage... 00:19:05.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:19:05.936 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:05.936 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:05.936 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:05.936 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:19:05.936 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:05.936 18:32:58 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:09.227 18:33:01 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 18:32:58.126168] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:09.227 [2024-07-15 18:32:58.126381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.227 Using job config with 4 jobs 00:19:09.227 EAL: TSC is not safe to use in SMP mode 00:19:09.227 EAL: TSC is not invariant 00:19:09.227 [2024-07-15 18:32:58.752079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.227 [2024-07-15 18:32:58.870832] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.227 [2024-07-15 18:32:58.873381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.227 cpumask for '\''job0'\'' is too big 00:19:09.227 cpumask for '\''job1'\'' is too big 00:19:09.227 cpumask for '\''job2'\'' is too big 00:19:09.227 cpumask for '\''job3'\'' is too big 00:19:09.227 Running I/O for 2 seconds... 
00:19:09.227 00:19:09.227 Latency(us) 00:19:09.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.227 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.227 Malloc0 : 2.00 307463.94 300.26 0.00 0.00 832.33 260.65 1951.18 00:19:09.227 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.227 Malloc0 : 2.00 307492.60 300.29 0.00 0.00 832.02 264.38 1645.85 00:19:09.227 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.227 Malloc0 : 2.00 307535.85 300.33 0.00 0.00 831.66 253.21 1355.40 00:19:09.227 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.227 Malloc0 : 2.00 307605.18 300.40 0.00 0.00 831.24 133.12 1325.61 00:19:09.227 =================================================================================================================== 00:19:09.227 Total : 1230097.57 1201.27 0.00 0.00 831.81 133.12 1951.18' 00:19:09.227 18:33:01 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 18:32:58.126168] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:09.227 [2024-07-15 18:32:58.126381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.227 Using job config with 4 jobs 00:19:09.227 EAL: TSC is not safe to use in SMP mode 00:19:09.227 EAL: TSC is not invariant 00:19:09.227 [2024-07-15 18:32:58.752079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.227 [2024-07-15 18:32:58.870832] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.227 [2024-07-15 18:32:58.873381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.227 cpumask for '\''job0'\'' is too big 00:19:09.227 cpumask for '\''job1'\'' is too big 00:19:09.227 cpumask for '\''job2'\'' is too big 00:19:09.227 cpumask for '\''job3'\'' is too big 00:19:09.227 Running I/O for 2 seconds... 00:19:09.227 00:19:09.227 Latency(us) 00:19:09.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.227 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.227 Malloc0 : 2.00 307463.94 300.26 0.00 0.00 832.33 260.65 1951.18 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307492.60 300.29 0.00 0.00 832.02 264.38 1645.85 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307535.85 300.33 0.00 0.00 831.66 253.21 1355.40 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307605.18 300.40 0.00 0.00 831.24 133.12 1325.61 00:19:09.228 =================================================================================================================== 00:19:09.228 Total : 1230097.57 1201.27 0.00 0.00 831.81 133.12 1951.18' 00:19:09.228 18:33:01 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:32:58.126168] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:19:09.228 [2024-07-15 18:32:58.126381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.228 Using job config with 4 jobs 00:19:09.228 EAL: TSC is not safe to use in SMP mode 00:19:09.228 EAL: TSC is not invariant 00:19:09.228 [2024-07-15 18:32:58.752079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.228 [2024-07-15 18:32:58.870832] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.228 [2024-07-15 18:32:58.873381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.228 cpumask for '\''job0'\'' is too big 00:19:09.228 cpumask for '\''job1'\'' is too big 00:19:09.228 cpumask for '\''job2'\'' is too big 00:19:09.228 cpumask for '\''job3'\'' is too big 00:19:09.228 Running I/O for 2 seconds... 00:19:09.228 00:19:09.228 Latency(us) 00:19:09.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307463.94 300.26 0.00 0.00 832.33 260.65 1951.18 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307492.60 300.29 0.00 0.00 832.02 264.38 1645.85 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307535.85 300.33 0.00 0.00 831.66 253.21 1355.40 00:19:09.228 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:09.228 Malloc0 : 2.00 307605.18 300.40 0.00 0.00 831.24 133.12 1325.61 00:19:09.228 =================================================================================================================== 00:19:09.228 Total : 1230097.57 1201.27 0.00 0.00 831.81 133.12 1951.18' 00:19:09.228 18:33:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:09.228 18:33:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:09.228 18:33:01 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:19:09.228 18:33:01 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:09.228 [2024-07-15 18:33:01.171444] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:09.228 [2024-07-15 18:33:01.171716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.487 EAL: TSC is not safe to use in SMP mode 00:19:09.487 EAL: TSC is not invariant 00:19:09.487 [2024-07-15 18:33:01.784587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.746 [2024-07-15 18:33:01.893325] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.746 [2024-07-15 18:33:01.895476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.746 cpumask for 'job0' is too big 00:19:09.746 cpumask for 'job1' is too big 00:19:09.746 cpumask for 'job2' is too big 00:19:09.746 cpumask for 'job3' is too big 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:19:12.277 Running I/O for 2 seconds... 
00:19:12.277 00:19:12.277 Latency(us) 00:19:12.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.277 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:12.277 Malloc0 : 2.00 311248.92 303.95 0.00 0.00 822.23 229.93 1541.58 00:19:12.277 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:12.277 Malloc0 : 2.00 311236.79 303.94 0.00 0.00 822.06 190.84 1362.85 00:19:12.277 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:12.277 Malloc0 : 2.00 311274.81 303.98 0.00 0.00 821.78 187.11 1422.43 00:19:12.277 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:12.277 Malloc0 : 2.00 311253.35 303.96 0.00 0.00 821.66 190.84 1400.09 00:19:12.277 =================================================================================================================== 00:19:12.277 Total : 1245013.88 1215.83 0.00 0.00 821.93 187.11 1541.58' 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:12.277 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:12.277 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:12.277 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:12.277 18:33:04 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
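The create_job traces above assemble an fio-style INI job file one section at a time: echo writes the [jobN] header, and rw=/filename= lines follow only when those locals are non-empty (the extra cat fires only for the [global] section). A minimal sketch of the resulting test.conf and the run it feeds, assuming the field order matches the order of the traced locals and that conf.json defines the Malloc0 bdev:

    # Approximate test.conf as assembled by the three create_job calls above
    # (field order inferred from the traced locals, not read from common.sh):
    cat > test.conf <<'EOF'
    [job0]
    rw=write
    filename=Malloc0
    [job1]
    rw=write
    filename=Malloc0
    [job2]
    rw=write
    filename=Malloc0
    EOF
    # 2-second pass over the bdevs defined in conf.json (assumed: Malloc0):
    ./build/examples/bdevperf -t 2 --json conf.json -j test.conf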
00:19:14.811 18:33:07 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 18:33:04.189942] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:14.811 [2024-07-15 18:33:04.190153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:14.811 Using job config with 3 jobs 00:19:14.811 EAL: TSC is not safe to use in SMP mode 00:19:14.811 EAL: TSC is not invariant 00:19:14.811 [2024-07-15 18:33:04.776764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.811 [2024-07-15 18:33:04.884440] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:14.811 [2024-07-15 18:33:04.886825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.811 cpumask for '\''job0'\'' is too big 00:19:14.811 cpumask for '\''job1'\'' is too big 00:19:14.811 cpumask for '\''job2'\'' is too big 00:19:14.811 Running I/O for 2 seconds... 00:19:14.811 00:19:14.811 Latency(us) 00:19:14.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380213.92 371.30 0.00 0.00 673.05 271.83 1310.72 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380199.86 371.29 0.00 0.00 672.85 210.39 1109.64 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380265.08 371.35 0.00 0.00 672.58 66.09 1117.09 00:19:14.811 =================================================================================================================== 00:19:14.811 Total : 1140678.85 1113.94 0.00 0.00 672.83 66.09 1310.72' 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 18:33:04.189942] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:14.811 [2024-07-15 18:33:04.190153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:14.811 Using job config with 3 jobs 00:19:14.811 EAL: TSC is not safe to use in SMP mode 00:19:14.811 EAL: TSC is not invariant 00:19:14.811 [2024-07-15 18:33:04.776764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.811 [2024-07-15 18:33:04.884440] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:14.811 [2024-07-15 18:33:04.886825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.811 cpumask for '\''job0'\'' is too big 00:19:14.811 cpumask for '\''job1'\'' is too big 00:19:14.811 cpumask for '\''job2'\'' is too big 00:19:14.811 Running I/O for 2 seconds... 
00:19:14.811 00:19:14.811 Latency(us) 00:19:14.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380213.92 371.30 0.00 0.00 673.05 271.83 1310.72 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380199.86 371.29 0.00 0.00 672.85 210.39 1109.64 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380265.08 371.35 0.00 0.00 672.58 66.09 1117.09 00:19:14.811 =================================================================================================================== 00:19:14.811 Total : 1140678.85 1113.94 0.00 0.00 672.83 66.09 1310.72' 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:33:04.189942] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:14.811 [2024-07-15 18:33:04.190153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:14.811 Using job config with 3 jobs 00:19:14.811 EAL: TSC is not safe to use in SMP mode 00:19:14.811 EAL: TSC is not invariant 00:19:14.811 [2024-07-15 18:33:04.776764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.811 [2024-07-15 18:33:04.884440] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:14.811 [2024-07-15 18:33:04.886825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.811 cpumask for '\''job0'\'' is too big 00:19:14.811 cpumask for '\''job1'\'' is too big 00:19:14.811 cpumask for '\''job2'\'' is too big 00:19:14.811 Running I/O for 2 seconds... 
00:19:14.811 00:19:14.811 Latency(us) 00:19:14.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380213.92 371.30 0.00 0.00 673.05 271.83 1310.72 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380199.86 371.29 0.00 0.00 672.85 210.39 1109.64 00:19:14.811 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:14.811 Malloc0 : 2.00 380265.08 371.35 0.00 0.00 672.58 66.09 1117.09 00:19:14.811 =================================================================================================================== 00:19:14.811 Total : 1140678.85 1113.94 0.00 0.00 672.83 66.09 1310.72' 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:19:14.811 18:33:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:19:14.812 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:14.812 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:14.812 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:19:14.812 
18:33:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:14.812 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:19:14.812 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:14.812 18:33:07 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:18.096 18:33:10 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 18:33:07.203966] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:18.096 [2024-07-15 18:33:07.204189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:18.096 Using job config with 4 jobs 00:19:18.096 EAL: TSC is not safe to use in SMP mode 00:19:18.096 EAL: TSC is not invariant 00:19:18.096 [2024-07-15 18:33:07.854668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.096 [2024-07-15 18:33:07.957950] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:18.096 [2024-07-15 18:33:07.960323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.096 cpumask for '\''job0'\'' is too big 00:19:18.096 cpumask for '\''job1'\'' is too big 00:19:18.096 cpumask for '\''job2'\'' is too big 00:19:18.096 cpumask for '\''job3'\'' is too big 00:19:18.096 Running I/O for 2 seconds... 
00:19:18.096 00:19:18.096 Latency(us) 00:19:18.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.096 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.096 Malloc0 : 2.00 150349.23 146.83 0.00 0.00 1702.32 692.60 4468.36 00:19:18.096 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.096 Malloc1 : 2.00 150342.18 146.82 0.00 0.00 1702.07 633.02 4438.57 00:19:18.096 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.096 Malloc0 : 2.00 150369.02 146.84 0.00 0.00 1700.89 618.12 3798.10 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150359.64 146.84 0.00 0.00 1700.72 595.78 3813.00 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150351.69 146.83 0.00 0.00 1700.08 651.64 3127.85 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150340.81 146.82 0.00 0.00 1699.98 580.89 3127.85 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150419.01 146.89 0.00 0.00 1698.26 348.16 2993.80 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150409.71 146.88 0.00 0.00 1698.02 231.80 2978.91 00:19:18.097 =================================================================================================================== 00:19:18.097 Total : 1202941.29 1174.75 0.00 0.00 1700.29 231.80 4468.36' 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 18:33:07.203966] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:18.097 [2024-07-15 18:33:07.204189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:18.097 Using job config with 4 jobs 00:19:18.097 EAL: TSC is not safe to use in SMP mode 00:19:18.097 EAL: TSC is not invariant 00:19:18.097 [2024-07-15 18:33:07.854668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.097 [2024-07-15 18:33:07.957950] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:18.097 [2024-07-15 18:33:07.960323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.097 cpumask for '\''job0'\'' is too big 00:19:18.097 cpumask for '\''job1'\'' is too big 00:19:18.097 cpumask for '\''job2'\'' is too big 00:19:18.097 cpumask for '\''job3'\'' is too big 00:19:18.097 Running I/O for 2 seconds... 
00:19:18.097 00:19:18.097 Latency(us) 00:19:18.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150349.23 146.83 0.00 0.00 1702.32 692.60 4468.36 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150342.18 146.82 0.00 0.00 1702.07 633.02 4438.57 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150369.02 146.84 0.00 0.00 1700.89 618.12 3798.10 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150359.64 146.84 0.00 0.00 1700.72 595.78 3813.00 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150351.69 146.83 0.00 0.00 1700.08 651.64 3127.85 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150340.81 146.82 0.00 0.00 1699.98 580.89 3127.85 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150419.01 146.89 0.00 0.00 1698.26 348.16 2993.80 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150409.71 146.88 0.00 0.00 1698.02 231.80 2978.91 00:19:18.097 =================================================================================================================== 00:19:18.097 Total : 1202941.29 1174.75 0.00 0.00 1700.29 231.80 4468.36' 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:33:07.203966] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:18.097 [2024-07-15 18:33:07.204189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:18.097 Using job config with 4 jobs 00:19:18.097 EAL: TSC is not safe to use in SMP mode 00:19:18.097 EAL: TSC is not invariant 00:19:18.097 [2024-07-15 18:33:07.854668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.097 [2024-07-15 18:33:07.957950] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:18.097 [2024-07-15 18:33:07.960323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.097 cpumask for '\''job0'\'' is too big 00:19:18.097 cpumask for '\''job1'\'' is too big 00:19:18.097 cpumask for '\''job2'\'' is too big 00:19:18.097 cpumask for '\''job3'\'' is too big 00:19:18.097 Running I/O for 2 seconds... 
00:19:18.097 00:19:18.097 Latency(us) 00:19:18.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150349.23 146.83 0.00 0.00 1702.32 692.60 4468.36 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150342.18 146.82 0.00 0.00 1702.07 633.02 4438.57 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150369.02 146.84 0.00 0.00 1700.89 618.12 3798.10 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150359.64 146.84 0.00 0.00 1700.72 595.78 3813.00 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150351.69 146.83 0.00 0.00 1700.08 651.64 3127.85 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150340.81 146.82 0.00 0.00 1699.98 580.89 3127.85 00:19:18.097 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc0 : 2.00 150419.01 146.89 0.00 0.00 1698.26 348.16 2993.80 00:19:18.097 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:18.097 Malloc1 : 2.00 150409.71 146.88 0.00 0.00 1698.02 231.80 2978.91 00:19:18.097 =================================================================================================================== 00:19:18.097 Total : 1202941.29 1174.75 0.00 0.00 1700.29 231.80 4468.36' 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:18.097 18:33:10 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:18.097 00:19:18.097 real 0m12.318s 00:19:18.097 user 0m9.575s 00:19:18.097 sys 0m2.761s 00:19:18.097 18:33:10 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:18.097 18:33:10 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:19:18.097 ************************************ 00:19:18.097 END TEST bdevperf_config 00:19:18.097 ************************************ 00:19:18.097 18:33:10 -- common/autotest_common.sh@1142 -- # return 0 00:19:18.097 18:33:10 -- spdk/autotest.sh@192 -- # uname -s 00:19:18.097 18:33:10 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:19:18.097 18:33:10 -- spdk/autotest.sh@198 -- # uname -s 00:19:18.097 18:33:10 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:19:18.097 18:33:10 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:19:18.097 18:33:10 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:18.097 18:33:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:18.097 18:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.097 18:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.097 
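The grep pair traced at common.sh@32 is how the harness validates each run: it pulls the "Using job config with N jobs" banner out of the captured bdevperf output and compares N against the expected section count. A standalone reconstruction, assuming get_num_jobs wraps exactly those two greps:

    # Hedged reconstruction of get_num_jobs (bdevperf/common.sh@32):
    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    # As checked at test_config.sh@43 above:
    [[ "$(get_num_jobs "$bdevperf_output")" == "4" ]]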
************************************ 00:19:18.097 START TEST blockdev_nvme 00:19:18.097 ************************************ 00:19:18.097 18:33:10 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:18.097 * Looking for test storage... 00:19:18.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:18.097 18:33:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:19:18.097 18:33:10 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:19:18.098 18:33:10 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:19:18.098 18:33:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68337 00:19:18.098 18:33:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:18.098 18:33:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:18.098 18:33:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68337 00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 68337 ']' 00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
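start_spdk_tgt above follows the standard autotest launch pattern: background the target, record its pid (68337 here), arm a cleanup trap, and block until the RPC socket answers. A minimal sketch, assuming waitforlisten from autotest_common.sh polls /var/tmp/spdk.sock until the app responds:

    # Minimal sketch of the launch/teardown pattern traced above
    # (helper internals are assumptions, not read from autotest_common.sh):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # assumed: waits for /var/tmp/spdk.sock to accept RPCs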
00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.098 18:33:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.098 [2024-07-15 18:33:10.458751] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:18.098 [2024-07-15 18:33:10.458945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:18.664 EAL: TSC is not safe to use in SMP mode 00:19:18.664 EAL: TSC is not invariant 00:19:18.664 [2024-07-15 18:33:11.045493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.926 [2024-07-15 18:33:11.163666] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:18.926 [2024-07-15 18:33:11.166275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.193 18:33:11 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.193 18:33:11 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:19:19.193 18:33:11 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:19:19.193 18:33:11 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:19:19.193 18:33:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:19:19.193 18:33:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:19.193 18:33:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.451 [2024-07-15 18:33:11.630951] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b0b5463c-42d8-11ef-9ade-d5fc5159efa5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b0b5463c-42d8-11ef-9ade-d5fc5159efa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:19:19.451 18:33:11 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 68337 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 68337 ']' 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 68337 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 68337 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:19.451 killing process with pid 68337 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 
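The JSON blob printed above is the bdev_get_bdevs RPC result for the QEMU-emulated controller, filtered to unclaimed bdevs; the harness then takes the first name (Nvme0n1) as the hello-world target. The same query can be issued outside the harness (which itself talks through the rpc_cmd pipe) with scripts/rpc.py:

    # Equivalent standalone query, using the jq filters from the trace above
    # (path relative to the spdk repo root):
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # -> Nvme0n1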
00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68337' 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 68337 00:19:19.451 18:33:11 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 68337 00:19:19.709 18:33:12 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:19.709 18:33:12 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:19.709 18:33:12 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:19.709 18:33:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.709 18:33:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.709 ************************************ 00:19:19.709 START TEST bdev_hello_world 00:19:19.709 ************************************ 00:19:19.709 18:33:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:19.967 [2024-07-15 18:33:12.115293] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:19.967 [2024-07-15 18:33:12.115584] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:20.534 EAL: TSC is not safe to use in SMP mode 00:19:20.534 EAL: TSC is not invariant 00:19:20.534 [2024-07-15 18:33:12.731265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.534 [2024-07-15 18:33:12.836954] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:20.534 [2024-07-15 18:33:12.839122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.534 [2024-07-15 18:33:12.897748] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:20.792 [2024-07-15 18:33:12.971136] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:20.792 [2024-07-15 18:33:12.971195] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:20.793 [2024-07-15 18:33:12.971209] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:20.793 [2024-07-15 18:33:12.972138] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:20.793 [2024-07-15 18:33:12.972447] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:20.793 [2024-07-15 18:33:12.972465] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:20.793 [2024-07-15 18:33:12.972764] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:20.793 00:19:20.793 [2024-07-15 18:33:12.972783] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:20.793 00:19:20.793 real 0m1.085s 00:19:20.793 user 0m0.430s 00:19:20.793 sys 0m0.653s 00:19:20.793 18:33:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.793 18:33:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:20.793 ************************************ 00:19:20.793 END TEST bdev_hello_world 00:19:20.793 ************************************ 00:19:21.057 18:33:13 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:21.057 18:33:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:19:21.057 18:33:13 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:21.057 18:33:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.057 18:33:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:21.058 ************************************ 00:19:21.058 START TEST bdev_bounds 00:19:21.058 ************************************ 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68408 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:21.058 Process bdevio pid: 68408 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68408' 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68408 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68408 ']' 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.058 18:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:21.058 [2024-07-15 18:33:13.248857] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:21.058 [2024-07-15 18:33:13.249121] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:21.626 EAL: TSC is not safe to use in SMP mode 00:19:21.626 EAL: TSC is not invariant 00:19:21.626 [2024-07-15 18:33:13.847435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.626 [2024-07-15 18:33:13.956570] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:21.626 [2024-07-15 18:33:13.956626] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
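bdev_bounds drives bdevio the same way blockdev_nvme drove spdk_tgt: start it with -w so it waits for a trigger, reserve 2048 MB (-s, matching the FreeBSD PRE_RESERVED_MEM set earlier), then fire the CUnit suite over RPC via tests.py, as seen a few lines below. A hedged sketch of that flow; the waitforlisten step and trap wording are assumed from the trace:

    # Sketch of the bdevio flow traced above (helper internals assumed):
    test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    trap 'killprocess "$bdevio_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"
    test/bdev/bdevio/tests.py perform_tests   # triggers the I/O test suite over RPC
    killprocess "$bdevio_pid"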
00:19:21.626 [2024-07-15 18:33:13.956636] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:21.626 [2024-07-15 18:33:13.960243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.626 [2024-07-15 18:33:13.960145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.626 [2024-07-15 18:33:13.960237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.626 [2024-07-15 18:33:14.018788] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:22.196 I/O targets: 00:19:22.196 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:22.196 00:19:22.196 00:19:22.196 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.196 http://cunit.sourceforge.net/ 00:19:22.196 00:19:22.196 00:19:22.196 Suite: bdevio tests on: Nvme0n1 00:19:22.196 Test: blockdev write read block ...passed 00:19:22.196 Test: blockdev write zeroes read block ...passed 00:19:22.196 Test: blockdev write zeroes read no split ...passed 00:19:22.196 Test: blockdev write zeroes read split ...passed 00:19:22.196 Test: blockdev write zeroes read split partial ...passed 00:19:22.196 Test: blockdev reset ...[2024-07-15 18:33:14.481562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:22.196 [2024-07-15 18:33:14.482878] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.196 passed 00:19:22.196 Test: blockdev write read 8 blocks ...passed 00:19:22.196 Test: blockdev write read size > 128k ...passed 00:19:22.196 Test: blockdev write read invalid size ...passed 00:19:22.196 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.196 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.196 Test: blockdev write read max offset ...passed 00:19:22.196 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.196 Test: blockdev writev readv 8 blocks ...passed 00:19:22.196 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.196 Test: blockdev writev readv block ...passed 00:19:22.196 Test: blockdev writev readv size > 128k ...passed 00:19:22.196 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.196 Test: blockdev comparev and writev ...[2024-07-15 18:33:14.487147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x152718000 len:0x1000 00:19:22.196 [2024-07-15 18:33:14.487195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:22.196 passed 00:19:22.196 Test: blockdev nvme passthru rw ...passed 00:19:22.196 Test: blockdev nvme passthru vendor specific ...passed 00:19:22.196 Test: blockdev nvme admin passthru ...[2024-07-15 18:33:14.487698] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:22.196 [2024-07-15 18:33:14.487718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:22.196 passed 00:19:22.196 Test: blockdev copy ...passed 00:19:22.196 00:19:22.196 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.196 suites 1 1 n/a 0 0 00:19:22.196 tests 23 23 23 0 0 00:19:22.196 asserts 152 152 152 0 n/a 00:19:22.196 00:19:22.196 Elapsed time = 0.023 seconds 00:19:22.196 0 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68408 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68408 ']' 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68408 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68408 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:19:22.196 killing process with pid 68408 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68408' 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68408 00:19:22.196 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68408 00:19:22.454 18:33:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:19:22.454 00:19:22.454 real 0m1.500s 00:19:22.454 user 0m2.709s 00:19:22.454 sys 0m0.742s 00:19:22.454 18:33:14 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.454 18:33:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:22.454 ************************************ 00:19:22.454 END TEST bdev_bounds 00:19:22.454 ************************************ 00:19:22.454 18:33:14 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:22.454 18:33:14 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:22.454 18:33:14 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:22.454 18:33:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.454 18:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.454 ************************************ 00:19:22.454 START TEST bdev_nbd 00:19:22.454 ************************************ 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:19:22.454 00:19:22.454 real 0m0.005s 00:19:22.454 user 0m0.000s 00:19:22.454 sys 0m0.007s 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.454 ************************************ 00:19:22.454 END TEST bdev_nbd 00:19:22.454 ************************************ 00:19:22.454 18:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:22.454 18:33:14 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:22.454 18:33:14 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:19:22.454 18:33:14 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:19:22.454 skipping fio tests on NVMe due to multi-ns failures. 00:19:22.455 18:33:14 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:19:22.455 18:33:14 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:22.455 18:33:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:22.455 18:33:14 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:22.455 18:33:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.455 18:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.455 ************************************ 00:19:22.455 START TEST bdev_verify 00:19:22.455 ************************************ 00:19:22.455 18:33:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:22.455 [2024-07-15 18:33:14.847065] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
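bdev_verify reuses the bdevperf binary against the generated NVMe config rather than a Malloc one. Decoding the invocation traced above (the -C flag is carried over verbatim; its semantic is not asserted here):

    # Flags: -q 128 (queue depth), -o 4096 (I/O size in bytes),
    # -w verify (write, read back, compare), -t 5 (run seconds),
    # -m 0x3 (reactor core mask: cores 0 and 1); -C as in the trace.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''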
00:19:22.455 [2024-07-15 18:33:14.847354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:23.388 EAL: TSC is not safe to use in SMP mode 00:19:23.388 EAL: TSC is not invariant 00:19:23.388 [2024-07-15 18:33:15.445713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:23.388 [2024-07-15 18:33:15.553515] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:23.388 [2024-07-15 18:33:15.553571] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:23.388 [2024-07-15 18:33:15.556350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.388 [2024-07-15 18:33:15.556339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.388 [2024-07-15 18:33:15.614498] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:23.388 Running I/O for 5 seconds... 00:19:28.707 00:19:28.707 Latency(us) 00:19:28.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.707 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.707 Verification LBA range: start 0x0 length 0xa0000 00:19:28.707 Nvme0n1 : 5.01 21423.39 83.69 0.00 0.00 5966.07 733.56 9234.61 00:19:28.707 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.707 Verification LBA range: start 0xa0000 length 0xa0000 00:19:28.707 Nvme0n1 : 5.01 21146.86 82.60 0.00 0.00 6043.74 748.45 9055.87 00:19:28.707 =================================================================================================================== 00:19:28.707 Total : 42570.25 166.29 0.00 0.00 6004.65 733.56 9234.61 00:19:29.275 00:19:29.275 real 0m6.603s 00:19:29.275 user 0m11.679s 00:19:29.275 sys 0m0.632s 00:19:29.275 18:33:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.275 ************************************ 00:19:29.275 END TEST bdev_verify 00:19:29.275 18:33:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:29.275 ************************************ 00:19:29.275 18:33:21 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:29.275 18:33:21 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:29.275 18:33:21 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:29.275 18:33:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.275 18:33:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:29.275 ************************************ 00:19:29.275 START TEST bdev_verify_big_io 00:19:29.275 ************************************ 00:19:29.275 18:33:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:29.275 [2024-07-15 18:33:21.501908] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
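A quick consistency check on the verify summary above: throughput is IOPS times I/O size, so the core-0 job's 21423.39 IOPS x 4096 B comes to about 87.75 MB/s, i.e. 83.69 MiB/s, exactly the MiB/s column; the Total row is the plain sum of the two jobs (21423.39 + 21146.86 = 42570.25 IOPS, 83.69 + 82.60 = 166.29 MiB/s). The same arithmetic applies to the 64 KiB big-I/O pass starting here.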
00:19:29.275 [2024-07-15 18:33:21.502163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:29.842 EAL: TSC is not safe to use in SMP mode 00:19:29.842 EAL: TSC is not invariant 00:19:29.842 [2024-07-15 18:33:22.118171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:29.842 [2024-07-15 18:33:22.235139] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:29.842 [2024-07-15 18:33:22.235210] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:29.842 [2024-07-15 18:33:22.238640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.842 [2024-07-15 18:33:22.238626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.100 [2024-07-15 18:33:22.297727] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:30.101 Running I/O for 5 seconds... 00:19:35.379 00:19:35.379 Latency(us) 00:19:35.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.379 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:35.379 Verification LBA range: start 0x0 length 0xa000 00:19:35.379 Nvme0n1 : 5.01 8207.14 512.95 0.00 0.00 15513.86 210.39 32648.81 00:19:35.379 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:35.379 Verification LBA range: start 0xa000 length 0xa000 00:19:35.379 Nvme0n1 : 5.01 8204.64 512.79 0.00 0.00 15510.68 90.76 25737.75 00:19:35.379 =================================================================================================================== 00:19:35.379 Total : 16411.78 1025.74 0.00 0.00 15512.27 90.76 32648.81 00:19:38.667 00:19:38.667 real 0m9.295s 00:19:38.667 user 0m17.000s 00:19:38.667 sys 0m0.660s 00:19:38.667 18:33:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.667 18:33:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.667 ************************************ 00:19:38.667 END TEST bdev_verify_big_io 00:19:38.667 ************************************ 00:19:38.667 18:33:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:38.667 18:33:30 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.667 18:33:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:38.667 18:33:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.667 18:33:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:38.667 ************************************ 00:19:38.667 START TEST bdev_write_zeroes 00:19:38.667 ************************************ 00:19:38.667 18:33:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.667 [2024-07-15 18:33:30.843883] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
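Worth noting across the two verification passes above: moving from 4 KiB to 64 KiB I/Os (-o 4096 to -o 65536) cut per-job IOPS by only about 2.6x (21423.39 down to 8207.14) while raising per-job bandwidth about 6.1x (83.69 to 512.95 MiB/s; 8207.14 x 65536 B is ~512.95 MiB/s, matching the table), the usual sign that the 4 KiB run is IOPS-bound rather than bandwidth-bound.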
00:19:38.667 [2024-07-15 18:33:30.844080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:39.235 EAL: TSC is not safe to use in SMP mode 00:19:39.235 EAL: TSC is not invariant 00:19:39.235 [2024-07-15 18:33:31.536330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.494 [2024-07-15 18:33:31.654960] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:39.494 [2024-07-15 18:33:31.657159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.494 [2024-07-15 18:33:31.715977] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:39.494 Running I/O for 1 seconds... 00:19:40.429 00:19:40.429 Latency(us) 00:19:40.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.429 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:40.429 Nvme0n1 : 1.00 66838.62 261.09 0.00 0.00 1913.61 562.27 12094.36 00:19:40.429 =================================================================================================================== 00:19:40.429 Total : 66838.62 261.09 0.00 0.00 1913.61 562.27 12094.36 00:19:40.688 00:19:40.688 real 0m2.186s 00:19:40.688 user 0m1.455s 00:19:40.688 sys 0m0.716s 00:19:40.688 18:33:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.688 18:33:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 ************************************ 00:19:40.688 END TEST bdev_write_zeroes 00:19:40.688 ************************************ 00:19:40.688 18:33:33 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:40.688 18:33:33 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:40.688 18:33:33 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:40.688 18:33:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.688 18:33:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 ************************************ 00:19:40.688 START TEST bdev_json_nonenclosed 00:19:40.688 ************************************ 00:19:40.688 18:33:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:40.688 [2024-07-15 18:33:33.074206] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:40.688 [2024-07-15 18:33:33.074437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:41.625 EAL: TSC is not safe to use in SMP mode 00:19:41.625 EAL: TSC is not invariant 00:19:41.625 [2024-07-15 18:33:33.695042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.625 [2024-07-15 18:33:33.813907] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
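The run starting here is a deliberate negative test: nonenclosed.json is not wrapped in a JSON object, so bdevperf is expected to reject it and exit non-zero, which the harness then treats as success (the es=234 capture and the bare `true` in the trace below). A sketch of that expected-failure pattern, assuming run_test semantics from autotest_common.sh:

    # The test passes only if bdevperf refuses the malformed config.
    if ./build/examples/bdevperf --json test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "malformed config was accepted" >&2
        exit 1    # accepting it would be the real failure
    fi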
00:19:41.625 [2024-07-15 18:33:33.816434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.625 [2024-07-15 18:33:33.816489] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:41.625 [2024-07-15 18:33:33.816503] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:41.625 [2024-07-15 18:33:33.816514] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:41.625 00:19:41.625 real 0m0.922s 00:19:41.625 user 0m0.257s 00:19:41.625 sys 0m0.666s 00:19:41.625 18:33:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:19:41.625 18:33:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.625 18:33:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:41.625 ************************************ 00:19:41.625 END TEST bdev_json_nonenclosed 00:19:41.625 ************************************ 00:19:41.625 18:33:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:41.625 18:33:34 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:19:41.625 18:33:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.625 18:33:34 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:41.625 18:33:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.625 18:33:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:41.884 ************************************ 00:19:41.884 START TEST bdev_json_nonarray 00:19:41.884 ************************************ 00:19:41.884 18:33:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.884 [2024-07-15 18:33:34.047368] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:19:41.884 [2024-07-15 18:33:34.047610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:42.450 EAL: TSC is not safe to use in SMP mode 00:19:42.450 EAL: TSC is not invariant 00:19:42.450 [2024-07-15 18:33:34.666045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.450 [2024-07-15 18:33:34.782208] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:42.450 [2024-07-15 18:33:34.784914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.450 [2024-07-15 18:33:34.784968] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
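Between this error and the "not enclosed in {}" one from the previous test, the two negative cases pin down the top-level shape json_config enforces: the file must be a JSON object whose "subsystems" key holds an array. A minimal sketch of a config both checks would accept (hypothetical path; an empty array simply configures nothing):

    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": []
    }
    EOF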
00:19:42.450 [2024-07-15 18:33:34.784982] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:42.450 [2024-07-15 18:33:34.784992] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:42.708 00:19:42.708 real 0m0.910s 00:19:42.708 user 0m0.254s 00:19:42.708 sys 0m0.654s 00:19:42.708 18:33:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:19:42.708 18:33:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.708 18:33:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:42.708 ************************************ 00:19:42.708 END TEST bdev_json_nonarray 00:19:42.708 ************************************ 00:19:42.708 18:33:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:19:42.708 18:33:34 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:19:42.708 00:19:42.708 real 0m24.692s 00:19:42.708 user 0m35.669s 00:19:42.708 sys 0m5.713s 00:19:42.708 18:33:34 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.708 18:33:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.708 ************************************ 00:19:42.708 END TEST blockdev_nvme 00:19:42.708 ************************************ 00:19:42.708 18:33:35 -- common/autotest_common.sh@1142 -- # return 0 00:19:42.708 18:33:35 -- spdk/autotest.sh@213 -- # uname -s 00:19:42.708 18:33:35 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:19:42.708 18:33:35 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:42.708 18:33:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:42.708 18:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.708 18:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:42.708 ************************************ 00:19:42.708 START TEST nvme 00:19:42.708 ************************************ 00:19:42.708 18:33:35 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:42.966 * Looking for test storage... 
00:19:42.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:42.966 18:33:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:42.966 hw.nic_uio.bdfs="0:16:0" 00:19:43.223 18:33:35 nvme -- nvme/nvme.sh@79 -- # uname 00:19:43.223 18:33:35 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:19:43.223 18:33:35 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:43.223 18:33:35 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:19:43.223 18:33:35 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.223 18:33:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:43.223 ************************************ 00:19:43.223 START TEST nvme_reset 00:19:43.223 ************************************ 00:19:43.223 18:33:35 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:43.789 EAL: TSC is not safe to use in SMP mode 00:19:43.789 EAL: TSC is not invariant 00:19:43.789 [2024-07-15 18:33:35.979635] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:43.789 Initializing NVMe Controllers 00:19:43.789 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:43.789 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:43.789 00:19:43.789 real 0m0.642s 00:19:43.789 user 0m0.005s 00:19:43.789 sys 0m0.637s 00:19:43.789 18:33:36 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.789 18:33:36 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:43.789 ************************************ 00:19:43.789 END TEST nvme_reset 00:19:43.789 ************************************ 00:19:43.789 18:33:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:43.789 18:33:36 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:43.789 18:33:36 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:43.789 18:33:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.789 18:33:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:43.789 ************************************ 00:19:43.789 START TEST nvme_identify 00:19:43.789 ************************************ 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:19:43.789 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:43.789 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:43.789 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:43.789 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:43.789 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:43.790 18:33:36 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:19:43.790 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:44.397 EAL: TSC is not safe to use in SMP mode 00:19:44.397 EAL: TSC is not invariant 00:19:44.397 [2024-07-15 18:33:36.700457] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:44.397 ===================================================== 00:19:44.397 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:44.397 ===================================================== 00:19:44.397 Controller Capabilities/Features 00:19:44.397 ================================ 00:19:44.397 Vendor ID: 1b36 00:19:44.397 Subsystem Vendor ID: 1af4 00:19:44.397 Serial Number: 12340 00:19:44.397 Model Number: QEMU NVMe Ctrl 00:19:44.397 Firmware Version: 8.0.0 00:19:44.397 Recommended Arb Burst: 6 00:19:44.397 IEEE OUI Identifier: 00 54 52 00:19:44.397 Multi-path I/O 00:19:44.397 May have multiple subsystem ports: No 00:19:44.397 May have multiple controllers: No 00:19:44.397 Associated with SR-IOV VF: No 00:19:44.397 Max Data Transfer Size: 524288 00:19:44.397 Max Number of Namespaces: 256 00:19:44.397 Max Number of I/O Queues: 64 00:19:44.397 NVMe Specification Version (VS): 1.4 00:19:44.397 NVMe Specification Version (Identify): 1.4 00:19:44.397 Maximum Queue Entries: 2048 00:19:44.397 Contiguous Queues Required: Yes 00:19:44.397 Arbitration Mechanisms Supported 00:19:44.397 Weighted Round Robin: Not Supported 00:19:44.397 Vendor Specific: Not Supported 00:19:44.397 Reset Timeout: 7500 ms 00:19:44.397 Doorbell Stride: 4 bytes 00:19:44.397 NVM Subsystem Reset: Not Supported 00:19:44.397 Command Sets Supported 00:19:44.397 NVM Command Set: Supported 00:19:44.397 Boot Partition: Not Supported 00:19:44.397 Memory Page Size Minimum: 4096 bytes 00:19:44.397 Memory Page Size Maximum: 65536 bytes 00:19:44.397 Persistent Memory Region: Not Supported 00:19:44.397 Optional Asynchronous Events Supported 00:19:44.397 Namespace Attribute Notices: Supported 00:19:44.397 Firmware Activation Notices: Not Supported 00:19:44.397 ANA Change Notices: Not Supported 00:19:44.397 PLE Aggregate Log Change Notices: Not Supported 00:19:44.397 LBA Status Info Alert Notices: Not Supported 00:19:44.397 EGE Aggregate Log Change Notices: Not Supported 00:19:44.397 Normal NVM Subsystem Shutdown event: Not Supported 00:19:44.397 Zone Descriptor Change Notices: Not Supported 00:19:44.397 Discovery Log Change Notices: Not Supported 00:19:44.397 Controller Attributes 00:19:44.397 128-bit Host Identifier: Not Supported 00:19:44.397 Non-Operational Permissive Mode: Not Supported 00:19:44.397 NVM Sets: Not Supported 00:19:44.397 Read Recovery Levels: Not Supported 00:19:44.397 Endurance Groups: Not Supported 00:19:44.397 Predictable Latency Mode: Not Supported 00:19:44.397 Traffic Based Keep ALive: Not Supported 00:19:44.397 Namespace Granularity: Not Supported 00:19:44.397 SQ Associations: Not Supported 00:19:44.397 UUID List: Not Supported 00:19:44.397 Multi-Domain Subsystem: Not Supported 00:19:44.397 Fixed Capacity Management: Not Supported 00:19:44.397 Variable Capacity Management: Not Supported 00:19:44.397 Delete Endurance Group: Not Supported 00:19:44.397 Delete NVM Set: Not Supported 00:19:44.397 Extended LBA Formats Supported: Supported 00:19:44.397 Flexible Data Placement Supported: Not Supported 00:19:44.397 00:19:44.397 Controller Memory Buffer Support 00:19:44.397 ================================ 00:19:44.397 Supported: No 00:19:44.397 00:19:44.397 
Persistent Memory Region Support 00:19:44.397 ================================ 00:19:44.397 Supported: No 00:19:44.397 00:19:44.397 Admin Command Set Attributes 00:19:44.397 ============================ 00:19:44.397 Security Send/Receive: Not Supported 00:19:44.397 Format NVM: Supported 00:19:44.397 Firmware Activate/Download: Not Supported 00:19:44.397 Namespace Management: Supported 00:19:44.397 Device Self-Test: Not Supported 00:19:44.397 Directives: Supported 00:19:44.397 NVMe-MI: Not Supported 00:19:44.397 Virtualization Management: Not Supported 00:19:44.397 Doorbell Buffer Config: Supported 00:19:44.398 Get LBA Status Capability: Not Supported 00:19:44.398 Command & Feature Lockdown Capability: Not Supported 00:19:44.398 Abort Command Limit: 4 00:19:44.398 Async Event Request Limit: 4 00:19:44.398 Number of Firmware Slots: N/A 00:19:44.398 Firmware Slot 1 Read-Only: N/A 00:19:44.398 Firmware Activation Without Reset: N/A 00:19:44.398 Multiple Update Detection Support: N/A 00:19:44.398 Firmware Update Granularity: No Information Provided 00:19:44.398 Per-Namespace SMART Log: Yes 00:19:44.398 Asymmetric Namespace Access Log Page: Not Supported 00:19:44.398 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:44.398 Command Effects Log Page: Supported 00:19:44.398 Get Log Page Extended Data: Supported 00:19:44.398 Telemetry Log Pages: Not Supported 00:19:44.398 Persistent Event Log Pages: Not Supported 00:19:44.398 Supported Log Pages Log Page: May Support 00:19:44.398 Commands Supported & Effects Log Page: Not Supported 00:19:44.398 Feature Identifiers & Effects Log Page:May Support 00:19:44.398 NVMe-MI Commands & Effects Log Page: May Support 00:19:44.398 Data Area 4 for Telemetry Log: Not Supported 00:19:44.398 Error Log Page Entries Supported: 1 00:19:44.398 Keep Alive: Not Supported 00:19:44.398 00:19:44.398 NVM Command Set Attributes 00:19:44.398 ========================== 00:19:44.398 Submission Queue Entry Size 00:19:44.398 Max: 64 00:19:44.398 Min: 64 00:19:44.398 Completion Queue Entry Size 00:19:44.398 Max: 16 00:19:44.398 Min: 16 00:19:44.398 Number of Namespaces: 256 00:19:44.398 Compare Command: Supported 00:19:44.398 Write Uncorrectable Command: Not Supported 00:19:44.398 Dataset Management Command: Supported 00:19:44.398 Write Zeroes Command: Supported 00:19:44.398 Set Features Save Field: Supported 00:19:44.398 Reservations: Not Supported 00:19:44.398 Timestamp: Supported 00:19:44.398 Copy: Supported 00:19:44.398 Volatile Write Cache: Present 00:19:44.398 Atomic Write Unit (Normal): 1 00:19:44.398 Atomic Write Unit (PFail): 1 00:19:44.398 Atomic Compare & Write Unit: 1 00:19:44.398 Fused Compare & Write: Not Supported 00:19:44.398 Scatter-Gather List 00:19:44.398 SGL Command Set: Supported 00:19:44.398 SGL Keyed: Not Supported 00:19:44.398 SGL Bit Bucket Descriptor: Not Supported 00:19:44.398 SGL Metadata Pointer: Not Supported 00:19:44.398 Oversized SGL: Not Supported 00:19:44.398 SGL Metadata Address: Not Supported 00:19:44.398 SGL Offset: Not Supported 00:19:44.398 Transport SGL Data Block: Not Supported 00:19:44.398 Replay Protected Memory Block: Not Supported 00:19:44.398 00:19:44.398 Firmware Slot Information 00:19:44.398 ========================= 00:19:44.398 Active slot: 1 00:19:44.398 Slot 1 Firmware Revision: 1.0 00:19:44.398 00:19:44.398 00:19:44.398 Commands Supported and Effects 00:19:44.398 ============================== 00:19:44.398 Admin Commands 00:19:44.398 -------------- 00:19:44.398 Delete I/O Submission Queue (00h): Supported 00:19:44.398 Create I/O 
Submission Queue (01h): Supported 00:19:44.398 Get Log Page (02h): Supported 00:19:44.398 Delete I/O Completion Queue (04h): Supported 00:19:44.398 Create I/O Completion Queue (05h): Supported 00:19:44.398 Identify (06h): Supported 00:19:44.398 Abort (08h): Supported 00:19:44.398 Set Features (09h): Supported 00:19:44.398 Get Features (0Ah): Supported 00:19:44.398 Asynchronous Event Request (0Ch): Supported 00:19:44.398 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:44.398 Directive Send (19h): Supported 00:19:44.398 Directive Receive (1Ah): Supported 00:19:44.398 Virtualization Management (1Ch): Supported 00:19:44.398 Doorbell Buffer Config (7Ch): Supported 00:19:44.398 Format NVM (80h): Supported LBA-Change 00:19:44.398 I/O Commands 00:19:44.398 ------------ 00:19:44.398 Flush (00h): Supported LBA-Change 00:19:44.398 Write (01h): Supported LBA-Change 00:19:44.398 Read (02h): Supported 00:19:44.398 Compare (05h): Supported 00:19:44.398 Write Zeroes (08h): Supported LBA-Change 00:19:44.398 Dataset Management (09h): Supported LBA-Change 00:19:44.398 Unknown (0Ch): Supported 00:19:44.398 Unknown (12h): Supported 00:19:44.398 Copy (19h): Supported LBA-Change 00:19:44.398 Unknown (1Dh): Supported LBA-Change 00:19:44.398 00:19:44.398 Error Log 00:19:44.398 ========= 00:19:44.398 00:19:44.398 Arbitration 00:19:44.398 =========== 00:19:44.398 Arbitration Burst: no limit 00:19:44.398 00:19:44.398 Power Management 00:19:44.398 ================ 00:19:44.398 Number of Power States: 1 00:19:44.398 Current Power State: Power State #0 00:19:44.398 Power State #0: 00:19:44.398 Max Power: 25.00 W 00:19:44.398 Non-Operational State: Operational 00:19:44.398 Entry Latency: 16 microseconds 00:19:44.398 Exit Latency: 4 microseconds 00:19:44.398 Relative Read Throughput: 0 00:19:44.398 Relative Read Latency: 0 00:19:44.398 Relative Write Throughput: 0 00:19:44.398 Relative Write Latency: 0 00:19:44.398 Idle Power: Not Reported 00:19:44.398 Active Power: Not Reported 00:19:44.398 Non-Operational Permissive Mode: Not Supported 00:19:44.399 00:19:44.399 Health Information 00:19:44.399 ================== 00:19:44.399 Critical Warnings: 00:19:44.399 Available Spare Space: OK 00:19:44.399 Temperature: OK 00:19:44.399 Device Reliability: OK 00:19:44.399 Read Only: No 00:19:44.399 Volatile Memory Backup: OK 00:19:44.399 Current Temperature: 323 Kelvin (50 Celsius) 00:19:44.399 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:44.399 Available Spare: 0% 00:19:44.399 Available Spare Threshold: 0% 00:19:44.399 Life Percentage Used: 0% 00:19:44.399 Data Units Read: 12249 00:19:44.399 Data Units Written: 12233 00:19:44.399 Host Read Commands: 295592 00:19:44.399 Host Write Commands: 295441 00:19:44.399 Controller Busy Time: 0 minutes 00:19:44.399 Power Cycles: 0 00:19:44.399 Power On Hours: 0 hours 00:19:44.399 Unsafe Shutdowns: 0 00:19:44.399 Unrecoverable Media Errors: 0 00:19:44.399 Lifetime Error Log Entries: 0 00:19:44.399 Warning Temperature Time: 0 minutes 00:19:44.399 Critical Temperature Time: 0 minutes 00:19:44.399 00:19:44.399 Number of Queues 00:19:44.399 ================ 00:19:44.399 Number of I/O Submission Queues: 64 00:19:44.399 Number of I/O Completion Queues: 64 00:19:44.399 00:19:44.399 ZNS Specific Controller Data 00:19:44.399 ============================ 00:19:44.399 Zone Append Size Limit: 0 00:19:44.399 00:19:44.399 00:19:44.399 Active Namespaces 00:19:44.399 ================= 00:19:44.399 Namespace ID:1 00:19:44.399 Error Recovery Timeout: Unlimited 00:19:44.399 Command Set 
Identifier: NVM (00h) 00:19:44.399 Deallocate: Supported 00:19:44.399 Deallocated/Unwritten Error: Supported 00:19:44.399 Deallocated Read Value: All 0x00 00:19:44.399 Deallocate in Write Zeroes: Not Supported 00:19:44.399 Deallocated Guard Field: 0xFFFF 00:19:44.399 Flush: Supported 00:19:44.399 Reservation: Not Supported 00:19:44.399 Namespace Sharing Capabilities: Private 00:19:44.399 Size (in LBAs): 1310720 (5GiB) 00:19:44.399 Capacity (in LBAs): 1310720 (5GiB) 00:19:44.399 Utilization (in LBAs): 1310720 (5GiB) 00:19:44.399 Thin Provisioning: Not Supported 00:19:44.399 Per-NS Atomic Units: No 00:19:44.399 Maximum Single Source Range Length: 128 00:19:44.399 Maximum Copy Length: 128 00:19:44.399 Maximum Source Range Count: 128 00:19:44.399 NGUID/EUI64 Never Reused: No 00:19:44.399 Namespace Write Protected: No 00:19:44.399 Number of LBA Formats: 8 00:19:44.399 Current LBA Format: LBA Format #04 00:19:44.399 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:44.399 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:44.399 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:44.399 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:44.399 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:44.399 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:44.399 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:44.399 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:44.399 00:19:44.399 NVM Specific Namespace Data 00:19:44.399 =========================== 00:19:44.399 Logical Block Storage Tag Mask: 0 00:19:44.399 Protection Information Capabilities: 00:19:44.399 16b Guard Protection Information Storage Tag Support: No 00:19:44.399 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:44.399 Storage Tag Check Read Support: No 00:19:44.399 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:44.399 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:44.399 18:33:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:45.333 EAL: TSC is not safe to use in SMP mode 00:19:45.333 EAL: TSC is not invariant 00:19:45.333 [2024-07-15 18:33:37.394293] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:45.333 ===================================================== 00:19:45.333 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:45.333 ===================================================== 00:19:45.333 Controller Capabilities/Features 00:19:45.333 ================================ 00:19:45.333 Vendor ID: 1b36 00:19:45.333 Subsystem Vendor ID: 1af4 00:19:45.333 Serial Number: 12340 00:19:45.333 Model Number: QEMU NVMe Ctrl 
00:19:45.333 Firmware Version: 8.0.0 00:19:45.333 Recommended Arb Burst: 6 00:19:45.333 IEEE OUI Identifier: 00 54 52 00:19:45.333 Multi-path I/O 00:19:45.333 May have multiple subsystem ports: No 00:19:45.333 May have multiple controllers: No 00:19:45.333 Associated with SR-IOV VF: No 00:19:45.333 Max Data Transfer Size: 524288 00:19:45.333 Max Number of Namespaces: 256 00:19:45.333 Max Number of I/O Queues: 64 00:19:45.333 NVMe Specification Version (VS): 1.4 00:19:45.333 NVMe Specification Version (Identify): 1.4 00:19:45.333 Maximum Queue Entries: 2048 00:19:45.333 Contiguous Queues Required: Yes 00:19:45.333 Arbitration Mechanisms Supported 00:19:45.333 Weighted Round Robin: Not Supported 00:19:45.333 Vendor Specific: Not Supported 00:19:45.333 Reset Timeout: 7500 ms 00:19:45.333 Doorbell Stride: 4 bytes 00:19:45.334 NVM Subsystem Reset: Not Supported 00:19:45.334 Command Sets Supported 00:19:45.334 NVM Command Set: Supported 00:19:45.334 Boot Partition: Not Supported 00:19:45.334 Memory Page Size Minimum: 4096 bytes 00:19:45.334 Memory Page Size Maximum: 65536 bytes 00:19:45.334 Persistent Memory Region: Not Supported 00:19:45.334 Optional Asynchronous Events Supported 00:19:45.334 Namespace Attribute Notices: Supported 00:19:45.334 Firmware Activation Notices: Not Supported 00:19:45.334 ANA Change Notices: Not Supported 00:19:45.334 PLE Aggregate Log Change Notices: Not Supported 00:19:45.334 LBA Status Info Alert Notices: Not Supported 00:19:45.334 EGE Aggregate Log Change Notices: Not Supported 00:19:45.334 Normal NVM Subsystem Shutdown event: Not Supported 00:19:45.334 Zone Descriptor Change Notices: Not Supported 00:19:45.334 Discovery Log Change Notices: Not Supported 00:19:45.334 Controller Attributes 00:19:45.334 128-bit Host Identifier: Not Supported 00:19:45.334 Non-Operational Permissive Mode: Not Supported 00:19:45.334 NVM Sets: Not Supported 00:19:45.334 Read Recovery Levels: Not Supported 00:19:45.334 Endurance Groups: Not Supported 00:19:45.334 Predictable Latency Mode: Not Supported 00:19:45.334 Traffic Based Keep ALive: Not Supported 00:19:45.334 Namespace Granularity: Not Supported 00:19:45.334 SQ Associations: Not Supported 00:19:45.334 UUID List: Not Supported 00:19:45.334 Multi-Domain Subsystem: Not Supported 00:19:45.334 Fixed Capacity Management: Not Supported 00:19:45.334 Variable Capacity Management: Not Supported 00:19:45.334 Delete Endurance Group: Not Supported 00:19:45.334 Delete NVM Set: Not Supported 00:19:45.334 Extended LBA Formats Supported: Supported 00:19:45.334 Flexible Data Placement Supported: Not Supported 00:19:45.334 00:19:45.334 Controller Memory Buffer Support 00:19:45.334 ================================ 00:19:45.334 Supported: No 00:19:45.334 00:19:45.334 Persistent Memory Region Support 00:19:45.334 ================================ 00:19:45.334 Supported: No 00:19:45.334 00:19:45.334 Admin Command Set Attributes 00:19:45.334 ============================ 00:19:45.334 Security Send/Receive: Not Supported 00:19:45.334 Format NVM: Supported 00:19:45.334 Firmware Activate/Download: Not Supported 00:19:45.334 Namespace Management: Supported 00:19:45.334 Device Self-Test: Not Supported 00:19:45.334 Directives: Supported 00:19:45.334 NVMe-MI: Not Supported 00:19:45.334 Virtualization Management: Not Supported 00:19:45.334 Doorbell Buffer Config: Supported 00:19:45.334 Get LBA Status Capability: Not Supported 00:19:45.334 Command & Feature Lockdown Capability: Not Supported 00:19:45.334 Abort Command Limit: 4 00:19:45.334 Async Event Request 
Limit: 4 00:19:45.334 Number of Firmware Slots: N/A 00:19:45.334 Firmware Slot 1 Read-Only: N/A 00:19:45.334 Firmware Activation Without Reset: N/A 00:19:45.334 Multiple Update Detection Support: N/A 00:19:45.334 Firmware Update Granularity: No Information Provided 00:19:45.334 Per-Namespace SMART Log: Yes 00:19:45.334 Asymmetric Namespace Access Log Page: Not Supported 00:19:45.334 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:45.334 Command Effects Log Page: Supported 00:19:45.334 Get Log Page Extended Data: Supported 00:19:45.334 Telemetry Log Pages: Not Supported 00:19:45.334 Persistent Event Log Pages: Not Supported 00:19:45.334 Supported Log Pages Log Page: May Support 00:19:45.334 Commands Supported & Effects Log Page: Not Supported 00:19:45.334 Feature Identifiers & Effects Log Page:May Support 00:19:45.334 NVMe-MI Commands & Effects Log Page: May Support 00:19:45.334 Data Area 4 for Telemetry Log: Not Supported 00:19:45.334 Error Log Page Entries Supported: 1 00:19:45.334 Keep Alive: Not Supported 00:19:45.334 00:19:45.334 NVM Command Set Attributes 00:19:45.334 ========================== 00:19:45.334 Submission Queue Entry Size 00:19:45.334 Max: 64 00:19:45.334 Min: 64 00:19:45.334 Completion Queue Entry Size 00:19:45.334 Max: 16 00:19:45.334 Min: 16 00:19:45.334 Number of Namespaces: 256 00:19:45.334 Compare Command: Supported 00:19:45.334 Write Uncorrectable Command: Not Supported 00:19:45.334 Dataset Management Command: Supported 00:19:45.334 Write Zeroes Command: Supported 00:19:45.334 Set Features Save Field: Supported 00:19:45.334 Reservations: Not Supported 00:19:45.334 Timestamp: Supported 00:19:45.334 Copy: Supported 00:19:45.334 Volatile Write Cache: Present 00:19:45.334 Atomic Write Unit (Normal): 1 00:19:45.334 Atomic Write Unit (PFail): 1 00:19:45.334 Atomic Compare & Write Unit: 1 00:19:45.334 Fused Compare & Write: Not Supported 00:19:45.334 Scatter-Gather List 00:19:45.334 SGL Command Set: Supported 00:19:45.334 SGL Keyed: Not Supported 00:19:45.334 SGL Bit Bucket Descriptor: Not Supported 00:19:45.334 SGL Metadata Pointer: Not Supported 00:19:45.334 Oversized SGL: Not Supported 00:19:45.334 SGL Metadata Address: Not Supported 00:19:45.334 SGL Offset: Not Supported 00:19:45.334 Transport SGL Data Block: Not Supported 00:19:45.334 Replay Protected Memory Block: Not Supported 00:19:45.334 00:19:45.334 Firmware Slot Information 00:19:45.334 ========================= 00:19:45.334 Active slot: 1 00:19:45.334 Slot 1 Firmware Revision: 1.0 00:19:45.334 00:19:45.334 00:19:45.334 Commands Supported and Effects 00:19:45.334 ============================== 00:19:45.334 Admin Commands 00:19:45.334 -------------- 00:19:45.334 Delete I/O Submission Queue (00h): Supported 00:19:45.334 Create I/O Submission Queue (01h): Supported 00:19:45.334 Get Log Page (02h): Supported 00:19:45.334 Delete I/O Completion Queue (04h): Supported 00:19:45.334 Create I/O Completion Queue (05h): Supported 00:19:45.334 Identify (06h): Supported 00:19:45.334 Abort (08h): Supported 00:19:45.334 Set Features (09h): Supported 00:19:45.334 Get Features (0Ah): Supported 00:19:45.334 Asynchronous Event Request (0Ch): Supported 00:19:45.334 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:45.334 Directive Send (19h): Supported 00:19:45.334 Directive Receive (1Ah): Supported 00:19:45.334 Virtualization Management (1Ch): Supported 00:19:45.334 Doorbell Buffer Config (7Ch): Supported 00:19:45.334 Format NVM (80h): Supported LBA-Change 00:19:45.334 I/O Commands 00:19:45.334 ------------ 
00:19:45.334 Flush (00h): Supported LBA-Change 00:19:45.334 Write (01h): Supported LBA-Change 00:19:45.334 Read (02h): Supported 00:19:45.334 Compare (05h): Supported 00:19:45.334 Write Zeroes (08h): Supported LBA-Change 00:19:45.334 Dataset Management (09h): Supported LBA-Change 00:19:45.334 Unknown (0Ch): Supported 00:19:45.334 Unknown (12h): Supported 00:19:45.334 Copy (19h): Supported LBA-Change 00:19:45.334 Unknown (1Dh): Supported LBA-Change 00:19:45.334 00:19:45.334 Error Log 00:19:45.334 ========= 00:19:45.334 00:19:45.334 Arbitration 00:19:45.334 =========== 00:19:45.334 Arbitration Burst: no limit 00:19:45.334 00:19:45.334 Power Management 00:19:45.334 ================ 00:19:45.334 Number of Power States: 1 00:19:45.334 Current Power State: Power State #0 00:19:45.334 Power State #0: 00:19:45.334 Max Power: 25.00 W 00:19:45.334 Non-Operational State: Operational 00:19:45.334 Entry Latency: 16 microseconds 00:19:45.334 Exit Latency: 4 microseconds 00:19:45.334 Relative Read Throughput: 0 00:19:45.334 Relative Read Latency: 0 00:19:45.334 Relative Write Throughput: 0 00:19:45.334 Relative Write Latency: 0 00:19:45.334 Idle Power: Not Reported 00:19:45.334 Active Power: Not Reported 00:19:45.334 Non-Operational Permissive Mode: Not Supported 00:19:45.334 00:19:45.334 Health Information 00:19:45.334 ================== 00:19:45.334 Critical Warnings: 00:19:45.334 Available Spare Space: OK 00:19:45.334 Temperature: OK 00:19:45.334 Device Reliability: OK 00:19:45.334 Read Only: No 00:19:45.334 Volatile Memory Backup: OK 00:19:45.334 Current Temperature: 323 Kelvin (50 Celsius) 00:19:45.334 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:45.334 Available Spare: 0% 00:19:45.334 Available Spare Threshold: 0% 00:19:45.334 Life Percentage Used: 0% 00:19:45.334 Data Units Read: 12249 00:19:45.334 Data Units Written: 12233 00:19:45.334 Host Read Commands: 295592 00:19:45.334 Host Write Commands: 295441 00:19:45.334 Controller Busy Time: 0 minutes 00:19:45.334 Power Cycles: 0 00:19:45.334 Power On Hours: 0 hours 00:19:45.334 Unsafe Shutdowns: 0 00:19:45.334 Unrecoverable Media Errors: 0 00:19:45.334 Lifetime Error Log Entries: 0 00:19:45.334 Warning Temperature Time: 0 minutes 00:19:45.334 Critical Temperature Time: 0 minutes 00:19:45.334 00:19:45.334 Number of Queues 00:19:45.334 ================ 00:19:45.334 Number of I/O Submission Queues: 64 00:19:45.334 Number of I/O Completion Queues: 64 00:19:45.334 00:19:45.334 ZNS Specific Controller Data 00:19:45.334 ============================ 00:19:45.334 Zone Append Size Limit: 0 00:19:45.334 00:19:45.334 00:19:45.334 Active Namespaces 00:19:45.334 ================= 00:19:45.334 Namespace ID:1 00:19:45.334 Error Recovery Timeout: Unlimited 00:19:45.334 Command Set Identifier: NVM (00h) 00:19:45.334 Deallocate: Supported 00:19:45.334 Deallocated/Unwritten Error: Supported 00:19:45.334 Deallocated Read Value: All 0x00 00:19:45.334 Deallocate in Write Zeroes: Not Supported 00:19:45.334 Deallocated Guard Field: 0xFFFF 00:19:45.335 Flush: Supported 00:19:45.335 Reservation: Not Supported 00:19:45.335 Namespace Sharing Capabilities: Private 00:19:45.335 Size (in LBAs): 1310720 (5GiB) 00:19:45.335 Capacity (in LBAs): 1310720 (5GiB) 00:19:45.335 Utilization (in LBAs): 1310720 (5GiB) 00:19:45.335 Thin Provisioning: Not Supported 00:19:45.335 Per-NS Atomic Units: No 00:19:45.335 Maximum Single Source Range Length: 128 00:19:45.335 Maximum Copy Length: 128 00:19:45.335 Maximum Source Range Count: 128 00:19:45.335 NGUID/EUI64 Never Reused: No 
00:19:45.335 Namespace Write Protected: No 00:19:45.335 Number of LBA Formats: 8 00:19:45.335 Current LBA Format: LBA Format #04 00:19:45.335 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:45.335 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:45.335 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:45.335 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:45.335 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:45.335 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:45.335 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:45.335 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:45.335 00:19:45.335 NVM Specific Namespace Data 00:19:45.335 =========================== 00:19:45.335 Logical Block Storage Tag Mask: 0 00:19:45.335 Protection Information Capabilities: 00:19:45.335 16b Guard Protection Information Storage Tag Support: No 00:19:45.335 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:45.335 Storage Tag Check Read Support: No 00:19:45.335 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:45.335 00:19:45.335 real 0m1.373s 00:19:45.335 user 0m0.030s 00:19:45.335 sys 0m1.354s 00:19:45.335 18:33:37 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.335 18:33:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:45.335 ************************************ 00:19:45.335 END TEST nvme_identify 00:19:45.335 ************************************ 00:19:45.335 18:33:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:45.335 18:33:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:45.335 18:33:37 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:45.335 18:33:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.335 18:33:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:45.335 ************************************ 00:19:45.335 START TEST nvme_perf 00:19:45.335 ************************************ 00:19:45.335 18:33:37 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:19:45.335 18:33:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:45.900 EAL: TSC is not safe to use in SMP mode 00:19:45.900 EAL: TSC is not invariant 00:19:45.900 [2024-07-15 18:33:38.115513] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:46.835 Initializing NVMe Controllers 00:19:46.835 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:46.835 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:46.835 Initialization complete. Launching workers. 
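Before the results block, the spdk_nvme_perf invocation traced above, decoded (a sketch: -q/-o/-w/-t are the standard workload options, while the -i and -N readings are my assumption from the tool's usage text, not confirmed by this log):

    # one-second sequential-read latency run against the attached controller
    #   -q 128    queue depth of 128
    #   -o 12288  12 KiB per I/O
    #   -w read   sequential reads
    #   -t 1      run for 1 second
    #   -LL       latency tracking; doubled to request the detailed
    #             per-bucket histogram that follows
    #   -i 0      shm group ID (assumed)
    #   -N        skip controller shutdown notification (assumed)
    ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N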
00:19:46.835 ======================================================== 00:19:46.835 Latency(us) 00:19:46.835 Device Information : IOPS MiB/s Average min max 00:19:46.835 PCIE (0000:00:10.0) NSID 1 from core 0: 92674.77 1086.03 1381.47 288.46 7944.32 00:19:46.835 ======================================================== 00:19:46.835 Total : 92674.77 1086.03 1381.47 288.46 7944.32 00:19:46.835 00:19:46.835 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:46.835 ================================================================================= 00:19:46.835 1.00000% : 1005.381us 00:19:46.835 10.00000% : 1079.853us 00:19:46.835 25.00000% : 1139.431us 00:19:46.835 50.00000% : 1251.140us 00:19:46.835 75.00000% : 1422.428us 00:19:46.835 90.00000% : 1593.715us 00:19:46.835 95.00000% : 1861.816us 00:19:46.835 98.00000% : 3961.945us 00:19:46.835 99.00000% : 4408.781us 00:19:46.835 99.50000% : 4706.671us 00:19:46.835 99.90000% : 7328.108us 00:19:46.835 99.99000% : 7983.468us 00:19:46.835 99.99900% : 7983.468us 00:19:46.835 99.99990% : 7983.468us 00:19:46.835 99.99999% : 7983.468us 00:19:46.835 00:19:46.835 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:46.835 ============================================================================== 00:19:46.835 Range in us Cumulative IO count 00:19:46.835 286.720 - 288.582: 0.0011% ( 1) 00:19:46.835 290.443 - 292.305: 0.0022% ( 1) 00:19:46.835 292.305 - 294.167: 0.0032% ( 1) 00:19:46.835 294.167 - 296.029: 0.0054% ( 2) 00:19:46.835 296.029 - 297.891: 0.0065% ( 1) 00:19:46.835 297.891 - 299.752: 0.0076% ( 1) 00:19:46.835 299.752 - 301.614: 0.0086% ( 1) 00:19:46.835 301.614 - 303.476: 0.0097% ( 1) 00:19:46.835 303.476 - 305.338: 0.0108% ( 1) 00:19:46.835 305.338 - 307.200: 0.0119% ( 1) 00:19:46.835 471.039 - 472.901: 0.0140% ( 2) 00:19:46.835 472.901 - 474.763: 0.0173% ( 3) 00:19:46.835 474.763 - 476.625: 0.0216% ( 4) 00:19:46.835 476.625 - 480.349: 0.0280% ( 6) 00:19:46.835 480.349 - 484.072: 0.0356% ( 7) 00:19:46.835 484.072 - 487.796: 0.0421% ( 6) 00:19:46.835 487.796 - 491.519: 0.0496% ( 7) 00:19:46.835 491.519 - 495.243: 0.0561% ( 6) 00:19:46.835 495.243 - 498.967: 0.0626% ( 6) 00:19:46.835 498.967 - 502.690: 0.0669% ( 4) 00:19:46.835 502.690 - 506.414: 0.0734% ( 6) 00:19:46.835 506.414 - 510.138: 0.0755% ( 2) 00:19:46.835 510.138 - 513.861: 0.0777% ( 2) 00:19:46.835 513.861 - 517.585: 0.0798% ( 2) 00:19:46.835 517.585 - 521.309: 0.0820% ( 2) 00:19:46.835 521.309 - 525.032: 0.0841% ( 2) 00:19:46.835 565.992 - 569.716: 0.0852% ( 1) 00:19:46.835 569.716 - 573.439: 0.0863% ( 1) 00:19:46.835 573.439 - 577.163: 0.0895% ( 3) 00:19:46.835 577.163 - 580.887: 0.0917% ( 2) 00:19:46.835 580.887 - 584.610: 0.0938% ( 2) 00:19:46.835 588.334 - 592.058: 0.0960% ( 2) 00:19:46.835 592.058 - 595.781: 0.0982% ( 2) 00:19:46.835 595.781 - 599.505: 0.1014% ( 3) 00:19:46.835 599.505 - 603.228: 0.1046% ( 3) 00:19:46.835 603.228 - 606.952: 0.1068% ( 2) 00:19:46.835 606.952 - 610.676: 0.1100% ( 3) 00:19:46.835 610.676 - 614.399: 0.1122% ( 2) 00:19:46.835 614.399 - 618.123: 0.1143% ( 2) 00:19:46.835 618.123 - 621.847: 0.1176% ( 3) 00:19:46.835 621.847 - 625.570: 0.1208% ( 3) 00:19:46.835 625.570 - 629.294: 0.1251% ( 4) 00:19:46.835 629.294 - 633.017: 0.1294% ( 4) 00:19:46.835 633.017 - 636.741: 0.1316% ( 2) 00:19:46.835 636.741 - 640.465: 0.1348% ( 3) 00:19:46.835 640.465 - 644.188: 0.1381% ( 3) 00:19:46.835 644.188 - 647.912: 0.1402% ( 2) 00:19:46.835 647.912 - 651.636: 0.1435% ( 3) 00:19:46.835 651.636 - 655.359: 0.1456% ( 2) 00:19:46.835 
00:19:46.835 [latency histogram continues: buckets from 655.359us through 5123.718us, cumulative IO count rising from 0.1478% to 99.6128%; individual bucket lines elided]
00:19:46.837 [final buckets from 5123.718us through 7983.468us, cumulative IO count reaching 100.0000%; individual bucket lines elided]
00:19:46.838 
00:19:46.838 18:33:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:19:47.452 EAL: TSC is not safe to use in SMP mode
00:19:47.452 EAL: TSC is not invariant
00:19:47.452 [2024-07-15 18:33:39.768446] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:48.389 Initializing NVMe Controllers
00:19:48.389 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:48.389 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:48.389 Initialization complete. Launching workers.
00:19:48.389 ========================================================
00:19:48.389                                                       Latency(us)
00:19:48.389 Device Information                     :     IOPS    MiB/s  Average      min      max
00:19:48.389 PCIE (0000:00:10.0) NSID 1 from core 0: 70121.29   821.73  1825.43   290.85  4935.10
00:19:48.389 ========================================================
00:19:48.389 Total                                  : 70121.29   821.73  1825.43   290.85  4935.10
00:19:48.389 
00:19:48.389 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:19:48.389 =================================================================================
00:19:48.389  1.00000% : 1414.980us
00:19:48.389 10.00000% : 1630.951us
00:19:48.389 25.00000% : 1720.318us
00:19:48.389 50.00000% : 1817.133us
00:19:48.389 75.00000% : 1936.289us
00:19:48.389 90.00000% : 2025.656us
00:19:48.389 95.00000% : 2100.129us
00:19:48.389 98.00000% : 2219.285us
00:19:48.389 99.00000% : 2442.703us
00:19:48.389 99.50000% : 2695.910us
00:19:48.389 99.90000% : 3395.953us
00:19:48.389 99.99000% : 4289.624us
00:19:48.389 99.99900% : 4944.984us
00:19:48.389 99.99990% : 4944.984us
00:19:48.389 99.99999% : 4944.984us
00:19:48.389 
00:19:48.389 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:19:48.389 ==============================================================================
00:19:48.389        Range in us     Cumulative    IO count
00:19:48.390 [histogram buckets from 290.443us through 4944.984us, cumulative IO count rising from 0.0028% to 100.0000%; individual bucket lines elided]
00:19:48.959 18:33:41 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:19:48.959 
00:19:48.959 real	0m3.799s
00:19:48.959 user	0m2.494s
00:19:48.959 sys	0m1.301s
00:19:48.959 18:33:41 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:48.959 18:33:41 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:19:48.959 ************************************
00:19:48.959 END TEST nvme_perf
00:19:48.959 ************************************
00:19:48.959 18:33:41 nvme -- common/autotest_common.sh@1142 -- # return 0
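The spdk_nvme_perf invocation above is what produced the summary and per-bucket latency tables in this section. A minimal sketch of repeating it by hand on the same VM follows; the flag glosses in the comments and the use of sudo are assumptions, not part of this log:

    # Sketch: rerun the 12 KiB, qd-128, 1-second write pass from this log.
    #   -q 128    outstanding I/Os per namespace (queue depth)
    #   -w write  I/O pattern
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       track latency and print the detailed per-bucket histogram
    #   -i 0      shared-memory group ID, matching the harness
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0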
00:19:48.959 18:33:41 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:19:48.959 18:33:41 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:19:48.959 18:33:41 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:48.959 18:33:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:48.959 ************************************
00:19:48.959 START TEST nvme_hello_world
00:19:48.959 ************************************
00:19:48.959 18:33:41 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:19:49.895 EAL: TSC is not safe to use in SMP mode
00:19:49.895 EAL: TSC is not invariant
00:19:49.895 [2024-07-15 18:33:41.966105] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:49.895 Initializing NVMe Controllers
00:19:49.895 Attaching to 0000:00:10.0
00:19:49.895 Attached to 0000:00:10.0
00:19:49.895 Namespace ID: 1 size: 5GB
00:19:49.895 Initialization complete.
00:19:49.895 INFO: using host memory buffer for IO
00:19:49.895 Hello world!
00:19:49.895 
00:19:49.895 real	0m0.674s
00:19:49.895 user	0m0.012s
00:19:49.895 sys	0m0.666s
00:19:49.895 18:33:42 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:49.895 ************************************
00:19:49.895 END TEST nvme_hello_world
00:19:49.895 ************************************
00:19:49.895 18:33:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:19:49.895 18:33:42 nvme -- common/autotest_common.sh@1142 -- # return 0
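hello_world is the smallest SPDK initiator example: as its output above suggests, it attaches to the first controller it finds, writes a "Hello world!" string to namespace 1 through a host memory buffer, then reads it back and prints it. A sketch of running it outside the harness (root privileges are an assumption):

    # Sketch: run the example standalone; -i 0 joins DPDK shared-memory
    # group 0, exactly as the run_test invocation above did.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0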
00:19:49.895 18:33:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:19:49.895 18:33:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:49.895 18:33:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:49.895 18:33:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:49.895 ************************************
00:19:49.895 START TEST nvme_sgl
00:19:49.895 ************************************
00:19:49.895 18:33:42 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:19:50.462 EAL: TSC is not safe to use in SMP mode
00:19:50.463 EAL: TSC is not invariant
00:19:50.463 [2024-07-15 18:33:42.687494] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:50.463 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:19:50.463 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:19:50.463 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:19:50.463 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:19:50.463 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:19:50.463 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:19:50.463 NVMe Readv/Writev Request test
00:19:50.463 Attaching to 0000:00:10.0
00:19:50.463 Attached to 0000:00:10.0
00:19:50.463 0000:00:10.0: build_io_request_2 test passed
00:19:50.463 0000:00:10.0: build_io_request_4 test passed
00:19:50.463 0000:00:10.0: build_io_request_5 test passed
00:19:50.463 0000:00:10.0: build_io_request_6 test passed
00:19:50.463 0000:00:10.0: build_io_request_7 test passed
00:19:50.463 0000:00:10.0: build_io_request_10 test passed
00:19:50.463 Cleaning up...
00:19:50.463 
00:19:50.463 real	0m0.680s
00:19:50.463 user	0m0.024s
00:19:50.463 sys	0m0.656s
00:19:50.463 18:33:42 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:50.463 18:33:42 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:19:50.463 ************************************
00:19:50.463 END TEST nvme_sgl
00:19:50.463 ************************************
00:19:50.463 18:33:42 nvme -- common/autotest_common.sh@1142 -- # return 0
00:19:50.463 18:33:42 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:19:50.463 18:33:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:50.463 18:33:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:50.463 18:33:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:50.463 ************************************
00:19:50.463 START TEST nvme_e2edp
00:19:50.463 ************************************
00:19:50.463 18:33:42 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:19:51.030 EAL: TSC is not safe to use in SMP mode
00:19:51.030 EAL: TSC is not invariant
00:19:51.030 [2024-07-15 18:33:43.426616] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:51.288 NVMe Write/Read with End-to-End data protection test
00:19:51.288 Attaching to 0000:00:10.0
00:19:51.288 Attached to 0000:00:10.0
00:19:51.288 Cleaning up...
00:19:51.288 
00:19:51.288 real	0m0.676s
00:19:51.288 user	0m0.009s
00:19:51.288 sys	0m0.670s
00:19:51.288 18:33:43 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:51.288 18:33:43 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:19:51.288 ************************************
00:19:51.288 END TEST nvme_e2edp
00:19:51.288 ************************************
00:19:51.288 18:33:43 nvme -- common/autotest_common.sh@1142 -- # return 0
00:19:51.288 18:33:43 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:19:51.288 18:33:43 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:51.288 18:33:43 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:51.288 18:33:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:51.288 ************************************
00:19:51.288 START TEST nvme_reserve
00:19:51.288 ************************************
00:19:51.288 18:33:43 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:19:51.855 EAL: TSC is not safe to use in SMP mode
00:19:51.855 EAL: TSC is not invariant
00:19:51.855 [2024-07-15 18:33:44.121365] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:51.855 =====================================================
00:19:51.855 NVMe Controller at PCI bus 0, device 16, function 0
00:19:51.855 =====================================================
00:19:51.855 Reservations: Not Supported
00:19:51.855 Reservation test passed
00:19:51.855 
00:19:51.855 real	0m0.664s
00:19:51.855 user	0m0.000s
00:19:51.855 sys	0m0.664s
00:19:51.855 18:33:44 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:51.855 18:33:44 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:19:51.855 ************************************
00:19:51.855 END TEST nvme_reserve
00:19:51.855 ************************************
00:19:51.855 18:33:44 nvme -- common/autotest_common.sh@1142 -- # return 0
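The three tests above are standalone binaries with no required arguments: sgl issues readv/writev requests with deliberately bad SGL lengths (the "Invalid IO length parameter" lines are the expected rejections), nvme_dp exercises end-to-end data protection, and reserve probes reservation support, which this QEMU controller does not offer. A sketch of running them one at a time (sudo is an assumption):

    # Sketch: each test attaches to the controller, runs, and cleans up.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/sgl/sgl            # SGL length validation
    sudo ./test/nvme/e2edp/nvme_dp      # end-to-end data protection
    sudo ./test/nvme/reserve/reserve    # reservation support probe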
00:19:51.855 18:33:44 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:19:51.855 18:33:44 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:51.855 18:33:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:51.855 18:33:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:51.855 ************************************
00:19:51.855 START TEST nvme_err_injection
00:19:51.855 ************************************
00:19:51.855 18:33:44 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:19:52.790 EAL: TSC is not safe to use in SMP mode
00:19:52.790 EAL: TSC is not invariant
00:19:52.790 [2024-07-15 18:33:44.853267] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:52.790 NVMe Error Injection test
00:19:52.790 Attaching to 0000:00:10.0
00:19:52.790 Attached to 0000:00:10.0
00:19:52.790 0000:00:10.0: get features failed as expected
00:19:52.790 0000:00:10.0: get features successfully as expected
00:19:52.790 0000:00:10.0: read failed as expected
00:19:52.790 0000:00:10.0: read successfully as expected
00:19:52.790 Cleaning up...
00:19:52.790 
00:19:52.790 real	0m0.691s
00:19:52.790 user	0m0.007s
00:19:52.790 sys	0m0.684s
00:19:52.790 18:33:44 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:52.790 18:33:44 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:19:52.790 ************************************
00:19:52.790 END TEST nvme_err_injection
00:19:52.790 ************************************
00:19:52.790 18:33:44 nvme -- common/autotest_common.sh@1142 -- # return 0
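Every section in this log follows the same run_test idiom: a START banner, the command run under xtrace, timing output, then an END banner. A simplified sketch of that wrapper is below; the real run_test in autotest_common.sh also manages xtrace state and failure reporting, and its exact body is not shown in this log, so this is an approximation:

    # Sketch: the START/END banner wrapper seen throughout this log.
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"            # run the test binary with its arguments
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }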
00:19:52.790 18:33:44 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:19:52.790 18:33:44 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:19:52.790 18:33:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:52.790 18:33:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:52.790 ************************************
00:19:52.790 START TEST nvme_overhead
00:19:52.790 ************************************
00:19:52.790 18:33:44 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:19:53.357 EAL: TSC is not safe to use in SMP mode
00:19:53.357 EAL: TSC is not invariant
00:19:53.357 [2024-07-15 18:33:45.599876] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:19:54.366 Initializing NVMe Controllers
00:19:54.366 Attaching to 0000:00:10.0
00:19:54.366 Attached to 0000:00:10.0
00:19:54.366 Initialization complete. Launching workers.
00:19:54.366 submit (in ns)   avg, min, max = 10045.2, 7128.6, 117561.2
00:19:54.366 complete (in ns) avg, min, max =  7165.7, 6080.0, 142077.1
00:19:54.366 
00:19:54.366 Submit histogram
00:19:54.366 ================
00:19:54.366        Range in us     Cumulative     Count
00:19:54.366 [submit histogram buckets from 7.127us through 117.760us, cumulative count rising from 0.0095% to 100.0000%; individual bucket lines elided]
00:19:54.368 
00:19:54.368 Complete histogram
00:19:54.368 ==================
00:19:54.368        Range in us     Cumulative     Count
00:19:54.368 [complete histogram buckets from 6.080us through 142.429us, cumulative count rising from 0.3984% to 100.0000%; individual bucket lines elided]
00:19:54.369 
00:19:54.369 real	0m1.679s
00:19:54.369 user	0m1.020s
00:19:54.369 sys	0m0.658s
00:19:54.369 18:33:46 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:54.369 18:33:46 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:19:54.369 ************************************
00:19:54.369 END TEST nvme_overhead
00:19:54.369 ************************************
00:19:54.369 18:33:46 nvme -- common/autotest_common.sh@1142 -- # return 0
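Unlike nvme_perf, the overhead test measures software cost per I/O rather than device latency: "submit" is time spent in the submission path and "complete" is time spent in completion processing, both in nanoseconds. A sketch of the invocation, with flag glosses that are assumptions based on the values shown in the trace above:

    # Sketch: reproduce the overhead run from this log.
    #   -o 4096  I/O size in bytes
    #   -t 1     run time in seconds
    #   -H       print the submit/complete histograms summarized above
    #   -i 0     shared-memory group ID
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0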
00:19:54.369 START TEST nvme_arbitration 00:19:54.369 ************************************ 00:19:54.369 18:33:46 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:54.936 EAL: TSC is not safe to use in SMP mode 00:19:54.936 EAL: TSC is not invariant 00:19:54.936 [2024-07-15 18:33:47.319464] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:59.112 Initializing NVMe Controllers 00:19:59.112 Attaching to 0000:00:10.0 00:19:59.112 Attached to 0000:00:10.0 00:19:59.112 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:59.112 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:19:59.112 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:19:59.112 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:19:59.112 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:59.112 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:59.112 Initialization complete. Launching workers. 00:19:59.112 Starting thread on core 1 with urgent priority queue 00:19:59.112 Starting thread on core 2 with urgent priority queue 00:19:59.112 Starting thread on core 3 with urgent priority queue 00:19:59.112 Starting thread on core 0 with urgent priority queue 00:19:59.112 QEMU NVMe Ctrl (12340 ) core 0: 5749.33 IO/s 17.39 secs/100000 ios 00:19:59.112 QEMU NVMe Ctrl (12340 ) core 1: 5777.33 IO/s 17.31 secs/100000 ios 00:19:59.112 QEMU NVMe Ctrl (12340 ) core 2: 5954.33 IO/s 16.79 secs/100000 ios 00:19:59.112 QEMU NVMe Ctrl (12340 ) core 3: 5887.00 IO/s 16.99 secs/100000 ios 00:19:59.112 ======================================================== 00:19:59.112 00:19:59.112 00:19:59.112 real 0m4.291s 00:19:59.112 user 0m12.669s 00:19:59.112 sys 0m0.652s 00:19:59.112 18:33:50 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.112 18:33:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:59.112 ************************************ 00:19:59.112 END TEST nvme_arbitration 00:19:59.112 ************************************ 00:19:59.112 18:33:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:59.112 18:33:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:59.112 18:33:51 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:59.112 18:33:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.112 18:33:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.112 ************************************ 00:19:59.112 START TEST nvme_single_aen 00:19:59.112 ************************************ 00:19:59.112 18:33:51 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:59.414 EAL: TSC is not safe to use in SMP mode 00:19:59.414 EAL: TSC is not invariant 00:19:59.414 [2024-07-15 18:33:51.629446] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:59.414 Asynchronous Event Request test 00:19:59.414 Attaching to 0000:00:10.0 00:19:59.414 Attached to 0000:00:10.0 00:19:59.414 Reset controller to setup AER completions for this process 00:19:59.414 Registering asynchronous event callbacks... 
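For reference, the arbitration run whose per-core IO/s totals appear above can be reproduced outside the harness. A minimal sketch using exactly the flags the example echoed in its "run with configuration" banner, with the workspace path taken from this log:

    # Re-run the arbitration example by hand with the configuration shown
    # in the log: 64-deep queues, 131072-byte random R/W at a 50/50 mix,
    # 3 seconds, cores 0-3 (-c 0xf), shared memory id 0.
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0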
00:19:59.414 Getting orig temperature thresholds of all controllers 00:19:59.414 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:59.414 Setting all controllers temperature threshold low to trigger AER 00:19:59.414 Waiting for all controllers temperature threshold to be set lower 00:19:59.414 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:59.414 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:59.414 Waiting for all controllers to trigger AER and reset threshold 00:19:59.414 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:59.414 Cleaning up... 00:19:59.414 00:19:59.414 real 0m0.664s 00:19:59.414 user 0m0.024s 00:19:59.414 sys 0m0.639s 00:19:59.414 18:33:51 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.414 ************************************ 00:19:59.414 END TEST nvme_single_aen 00:19:59.414 ************************************ 00:19:59.414 18:33:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:59.414 18:33:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:59.414 18:33:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:59.414 18:33:51 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:59.414 18:33:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.414 18:33:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.414 ************************************ 00:19:59.414 START TEST nvme_doorbell_aers 00:19:59.414 ************************************ 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:59.414 18:33:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:00.347 EAL: TSC is not safe to use in SMP mode 00:20:00.347 EAL: TSC is not invariant 00:20:00.347 [2024-07-15 18:33:52.464027] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:00.347 Executing: test_write_invalid_db 00:20:00.347 Waiting for AER completion... 00:20:00.347 Asynchronous Event received. 
00:20:00.347 Error Information Log Page received.
00:20:00.347 Success: test_write_invalid_db
00:20:00.347
00:20:00.347 Executing: test_invalid_db_write_overflow_sq
00:20:00.347 Waiting for AER completion...
00:20:00.347 Asynchronous Event received.
00:20:00.347 Error Information Log Page received.
00:20:00.347 Success: test_invalid_db_write_overflow_sq
00:20:00.347
00:20:00.347 Executing: test_invalid_db_write_overflow_cq
00:20:00.347 Waiting for AER completion...
00:20:00.347 Asynchronous Event received.
00:20:00.347 Error Information Log Page received.
00:20:00.347 Success: test_invalid_db_write_overflow_cq
00:20:00.347
00:20:00.347
00:20:00.347 real 0m0.793s
00:20:00.347 user 0m0.014s
00:20:00.347 sys 0m0.777s
00:20:00.347 18:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:00.347 18:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:20:00.347 ************************************
00:20:00.347 END TEST nvme_doorbell_aers
00:20:00.347 ************************************
00:20:00.347 18:33:52 nvme -- common/autotest_common.sh@1142 -- # return 0
00:20:00.347 18:33:52 nvme -- nvme/nvme.sh@97 -- # uname
00:20:00.347 18:33:52 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']'
00:20:00.347 18:33:52 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:20:00.347 18:33:52 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:20:00.347 18:33:52 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:00.347 18:33:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:00.347 ************************************
00:20:00.347 START TEST bdev_nvme_reset_stuck_adm_cmd
00:20:00.347 ************************************
00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:20:00.347 * Looking for test storage...
00:20:00.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69086 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69086 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 69086 ']' 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
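The launch-and-wait choreography above (spdk_tgt forked into the background, then blocking until its RPC socket answers) is easy to miss in the xtrace. A reduced stand-in for the same pattern, assuming the default /var/tmp/spdk.sock socket; the real waitforlisten in test/common/autotest_common.sh adds retry limits and diagnostics on top of this:

    # Minimal sketch of the spdk_tgt + waitforlisten pair traced above.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -m 0xF &
    spdk_target_pid=$!
    trap 'kill "$spdk_target_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC socket until the target answers; rpc_get_methods is the
    # usual liveness probe.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt ($spdk_target_pid) is listening"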
00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.347 18:33:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:00.603 [2024-07-15 18:33:52.754505] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:20:00.603 [2024-07-15 18:33:52.754733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:01.166 EAL: TSC is not safe to use in SMP mode 00:20:01.166 EAL: TSC is not invariant 00:20:01.166 [2024-07-15 18:33:53.457637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.422 [2024-07-15 18:33:53.581718] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:01.422 [2024-07-15 18:33:53.581809] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:20:01.422 [2024-07-15 18:33:53.581826] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:20:01.422 [2024-07-15 18:33:53.581840] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:20:01.422 [2024-07-15 18:33:53.586851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.422 [2024-07-15 18:33:53.586996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.422 [2024-07-15 18:33:53.586918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.422 [2024-07-15 18:33:53.587011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.680 18:33:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.680 18:33:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:20:01.680 18:33:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:01.680 18:33:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.680 18:33:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:01.680 [2024-07-15 18:33:53.958930] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:01.680 nvme0n1 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:01.680 true 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.680 18:33:54 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721068434 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69102 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:01.680 18:33:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:20:04.206 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:04.206 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 [2024-07-15 18:33:56.059131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:04.207 [2024-07-15 18:33:56.059352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:04.207 [2024-07-15 18:33:56.059376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:04.207 [2024-07-15 18:33:56.059387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.207 [2024-07-15 18:33:56.060409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
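Condensed, the stuck-admin-command scenario that just completed is a short RPC sequence. The sketch below uses only RPC names and flags that appear verbatim in the xtrace; the GET_FEATURES_B64 variable is a placeholder for the base64-encoded Get Features command the test constructs (shown in full in the trace above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. Attach the controller under a bdev_nvme name.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # 2. Arm a one-shot injection: hold the next admin opc 10 (0x0a, Get
    #    Features) for up to 15 s and fail it with sct=0/sc=1 instead of
    #    submitting it to the device.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # 3. Fire the admin command that will get stuck (payload elided here).
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" &
    # 4. Reset the controller; the disconnect path completes the held command
    #    manually -- the "Command completed manually" notice in the log.
    $rpc bdev_nvme_reset_controller nvme0
    wait    # step 3 returns once the reset flushes the injected command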
00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.207 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69102 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69102 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69102 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.4q93YL 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.7mfciw 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 69086 ']' 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:04.207 killing process with pid 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69086' 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 69086 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:04.207 00:20:04.207 real 0m3.877s 00:20:04.207 user 0m12.165s 00:20:04.207 sys 0m0.899s 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.207 18:33:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 ************************************ 00:20:04.207 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:04.207 ************************************ 00:20:04.207 18:33:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:04.207 18:33:56 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:04.207 18:33:56 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:04.207 18:33:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:04.207 18:33:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.207 18:33:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 ************************************ 00:20:04.207 START TEST nvme_fio 00:20:04.207 ************************************ 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:04.207 18:33:56 
nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:04.207 18:33:56 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:04.207 18:33:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:04.790 EAL: TSC is not safe to use in SMP mode 00:20:04.790 EAL: TSC is not invariant 00:20:04.790 [2024-07-15 18:33:57.128034] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:04.790 18:33:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:04.790 18:33:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:05.357 EAL: TSC is not safe to use in SMP mode 00:20:05.357 EAL: TSC is not invariant 00:20:05.357 [2024-07-15 18:33:57.753787] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:05.617 18:33:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:05.617 18:33:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:05.617 18:33:57 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:05.617 18:33:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:05.617 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:05.617 fio-3.35 00:20:05.617 Starting 1 thread 00:20:06.183 EAL: TSC is not safe to use in SMP mode 00:20:06.183 EAL: TSC is not invariant 00:20:06.183 [2024-07-15 18:33:58.490433] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:08.715 00:20:08.715 test: (groupid=0, jobs=1): err= 0: pid=101529: Mon Jul 15 18:34:00 2024 00:20:08.715 read: IOPS=45.8k, BW=179MiB/s (188MB/s)(358MiB/2001msec) 00:20:08.715 slat (nsec): min=432, max=40634, avg=565.31, stdev=486.12 00:20:08.715 clat (usec): min=260, max=4689, avg=1396.65, stdev=258.67 00:20:08.715 lat (usec): min=261, max=4730, avg=1397.22, stdev=258.69 00:20:08.715 clat percentiles (usec): 00:20:08.715 | 1.00th=[ 537], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1237], 00:20:08.715 | 30.00th=[ 1287], 40.00th=[ 1336], 50.00th=[ 1385], 60.00th=[ 1434], 00:20:08.715 | 70.00th=[ 1483], 80.00th=[ 1532], 90.00th=[ 1631], 95.00th=[ 1778], 00:20:08.715 | 99.00th=[ 2245], 99.50th=[ 2442], 99.90th=[ 3359], 99.95th=[ 4146], 00:20:08.715 | 99.99th=[ 4621] 00:20:08.715 bw ( KiB/s): min=168848, max=189504, per=99.16%, avg=181720.00, stdev=11228.69, samples=3 00:20:08.715 iops : min=42212, max=47376, avg=45430.00, stdev=2807.17, samples=3 00:20:08.715 write: IOPS=45.7k, BW=178MiB/s (187MB/s)(357MiB/2001msec); 0 zone resets 00:20:08.715 slat (nsec): min=456, max=33505, avg=820.14, stdev=1063.55 00:20:08.715 clat (usec): min=300, max=4668, avg=1396.93, stdev=261.26 00:20:08.715 lat (usec): min=310, max=4671, avg=1397.75, stdev=261.28 00:20:08.715 clat percentiles (usec): 00:20:08.715 | 1.00th=[ 529], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1221], 00:20:08.715 | 30.00th=[ 1287], 40.00th=[ 1336], 50.00th=[ 1385], 60.00th=[ 1434], 00:20:08.715 | 70.00th=[ 1483], 80.00th=[ 1532], 90.00th=[ 1631], 95.00th=[ 1778], 00:20:08.715 | 99.00th=[ 2245], 99.50th=[ 2442], 99.90th=[ 3392], 99.95th=[ 4228], 00:20:08.715 | 99.99th=[ 4555] 00:20:08.715 bw ( KiB/s): min=168768, max=189272, per=98.94%, avg=180773.33, stdev=10692.34, samples=3 00:20:08.715 iops : min=42192, max=47318, avg=45193.33, stdev=2673.08, samples=3 00:20:08.715 lat (usec) : 500=0.84%, 750=0.68%, 1000=0.53% 00:20:08.715 lat (msec) : 2=95.78%, 4=2.11%, 10=0.07% 00:20:08.715 cpu : usr=99.90%, 
sys=0.00%, ctx=23, majf=0, minf=2 00:20:08.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:08.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:08.715 issued rwts: total=91677,91403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:08.715 00:20:08.715 Run status group 0 (all jobs): 00:20:08.715 READ: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=358MiB (376MB), run=2001-2001msec 00:20:08.715 WRITE: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=357MiB (374MB), run=2001-2001msec 00:20:09.651 18:34:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:09.651 18:34:01 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:09.651 00:20:09.651 real 0m5.241s 00:20:09.651 user 0m2.607s 00:20:09.651 sys 0m2.545s 00:20:09.651 18:34:01 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.651 ************************************ 00:20:09.651 18:34:01 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 END TEST nvme_fio 00:20:09.651 ************************************ 00:20:09.651 18:34:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:09.651 00:20:09.651 real 0m26.714s 00:20:09.651 user 0m31.430s 00:20:09.651 sys 0m13.304s 00:20:09.651 18:34:01 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.651 18:34:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 ************************************ 00:20:09.651 END TEST nvme 00:20:09.651 ************************************ 00:20:09.651 18:34:01 -- common/autotest_common.sh@1142 -- # return 0 00:20:09.651 18:34:01 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:20:09.651 18:34:01 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:09.651 18:34:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:09.651 18:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.651 18:34:01 -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 ************************************ 00:20:09.651 START TEST nvme_scc 00:20:09.651 ************************************ 00:20:09.651 18:34:01 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:09.651 * Looking for test storage... 
00:20:09.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:09.651 18:34:01 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.651 18:34:01 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.651 18:34:01 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.651 18:34:01 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.651 18:34:01 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:09.651 18:34:01 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:09.651 18:34:01 nvme_scc -- paths/export.sh@4 -- # export PATH 00:20:09.651 18:34:01 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:09.651 18:34:01 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:09.651 18:34:01 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.651 18:34:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:09.651 18:34:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:20:09.651 18:34:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:20:09.651 00:20:09.651 real 0m0.145s 00:20:09.651 user 0m0.100s 00:20:09.651 sys 0m0.112s 00:20:09.651 18:34:01 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.651 18:34:01 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 ************************************ 00:20:09.651 END TEST nvme_scc 00:20:09.651 ************************************ 00:20:09.651 18:34:01 -- common/autotest_common.sh@1142 -- # return 0 00:20:09.651 18:34:01 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:20:09.651 18:34:01 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:20:09.651 18:34:01 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:20:09.651 18:34:01 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:20:09.651 18:34:01 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:20:09.651 18:34:01 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:09.651 18:34:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:09.651 18:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.651 18:34:01 -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 ************************************ 00:20:09.651 START TEST nvme_rpc 00:20:09.651 ************************************ 00:20:09.651 18:34:01 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:09.910 * Looking for test storage... 00:20:09.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69340 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:09.910 18:34:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69340 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 69340 ']' 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.910 18:34:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.910 [2024-07-15 18:34:02.209549] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
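A few entries above, get_first_nvme_bdf resolved the target device for nvme_rpc. Stripped of xtrace noise, that helper is a jq pipeline over gen_nvme.sh's generated config; a reduced sketch, where the error branch is an addition and everything else mirrors the trace:

    # Resolve NVMe PCI addresses the way get_nvme_bdfs does in this log.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}          # first controller, e.g. 0000:00:10.0
    printf '%s\n' "$bdf"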
00:20:09.910 [2024-07-15 18:34:02.209822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:10.479 EAL: TSC is not safe to use in SMP mode 00:20:10.479 EAL: TSC is not invariant 00:20:10.479 [2024-07-15 18:34:02.815580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:10.737 [2024-07-15 18:34:02.921437] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:10.737 [2024-07-15 18:34:02.921510] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:20:10.737 [2024-07-15 18:34:02.924263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.737 [2024-07-15 18:34:02.924256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.995 18:34:03 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.995 18:34:03 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:20:10.995 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:11.255 [2024-07-15 18:34:03.461832] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:11.255 Nvme0n1 00:20:11.255 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:11.255 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:11.521 request: 00:20:11.521 { 00:20:11.521 "bdev_name": "Nvme0n1", 00:20:11.521 "filename": "non_existing_file", 00:20:11.521 "method": "bdev_nvme_apply_firmware", 00:20:11.521 "req_id": 1 00:20:11.521 } 00:20:11.521 Got JSON-RPC error response 00:20:11.521 response: 00:20:11.521 { 00:20:11.521 "code": -32603, 00:20:11.521 "message": "open file failed." 
00:20:11.521 } 00:20:11.521 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:11.521 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:11.521 18:34:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:11.794 18:34:04 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:11.794 18:34:04 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69340 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 69340 ']' 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 69340 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 69340 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:11.794 killing process with pid 69340 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69340' 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@967 -- # kill 69340 00:20:11.794 18:34:04 nvme_rpc -- common/autotest_common.sh@972 -- # wait 69340 00:20:12.052 00:20:12.052 real 0m2.337s 00:20:12.052 user 0m3.983s 00:20:12.052 sys 0m0.891s 00:20:12.052 18:34:04 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.052 18:34:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.052 ************************************ 00:20:12.052 END TEST nvme_rpc 00:20:12.052 ************************************ 00:20:12.052 18:34:04 -- common/autotest_common.sh@1142 -- # return 0 00:20:12.052 18:34:04 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:12.052 18:34:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.052 18:34:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.052 18:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:12.053 ************************************ 00:20:12.053 START TEST nvme_rpc_timeouts 00:20:12.053 ************************************ 00:20:12.053 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:12.312 * Looking for test storage... 
00:20:12.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69381 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69381 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69409 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69409 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69409 ']' 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.312 18:34:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.312 18:34:04 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:12.312 [2024-07-15 18:34:04.516881] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:20:12.312 [2024-07-15 18:34:04.517141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:12.879 EAL: TSC is not safe to use in SMP mode 00:20:12.879 EAL: TSC is not invariant 00:20:12.879 [2024-07-15 18:34:05.132317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:12.879 [2024-07-15 18:34:05.248851] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:12.879 [2024-07-15 18:34:05.248909] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:20:12.879 [2024-07-15 18:34:05.252073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.879 [2024-07-15 18:34:05.252062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.445 18:34:05 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.445 18:34:05 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:20:13.445 18:34:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:20:13.445 Checking default timeout settings: 00:20:13.445 18:34:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:13.703 Making settings changes with rpc: 00:20:13.703 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:20:13.703 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:20:13.961 Check default vs. modified settings: 00:20:13.961 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:20:13.961 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:20:14.527 Setting action_on_timeout is changed as expected. 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:20:14.527 Setting timeout_us is changed as expected. 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:20:14.527 Setting timeout_admin_us is changed as expected. 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
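All three checks above follow one template: grep the field out of each save_config snapshot, strip punctuation, and compare. A self-contained sketch of that loop, reusing the file names, RPC calls, and text processing the trace shows (the redirection of save_config into the tmp files is implicit in the script and made explicit here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    default=/tmp/settings_default_69381
    modified=/tmp/settings_modified_69381
    $rpc save_config > "$default"
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > "$modified"
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # Pass if the value actually changed, exactly as the '[' $before == $after ']'
        # tests above do.
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done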
00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69381 /tmp/settings_modified_69381 00:20:14.527 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69409 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69409 ']' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69409 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69409 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:14.527 killing process with pid 69409 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69409' 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69409 00:20:14.527 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69409 00:20:14.786 RPC TIMEOUT SETTING TEST PASSED. 00:20:14.786 18:34:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:20:14.786 00:20:14.786 real 0m2.624s 00:20:14.786 user 0m4.867s 00:20:14.786 sys 0m0.897s 00:20:14.786 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.786 18:34:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:14.786 ************************************ 00:20:14.786 END TEST nvme_rpc_timeouts 00:20:14.786 ************************************ 00:20:14.786 18:34:07 -- common/autotest_common.sh@1142 -- # return 0 00:20:14.786 18:34:07 -- spdk/autotest.sh@243 -- # uname -s 00:20:14.786 18:34:07 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:20:14.786 18:34:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:14.786 18:34:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.786 18:34:07 -- common/autotest_common.sh@10 -- # set +x 00:20:14.786 18:34:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:20:14.786 18:34:07 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
00:20:14.786 18:34:07 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:20:14.786 18:34:07 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:20:14.786 18:34:07 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:20:14.786 18:34:07 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:20:14.786 18:34:07 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:20:14.786 18:34:07 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:14.786 18:34:07 -- common/autotest_common.sh@10 -- # set +x
00:20:14.786 18:34:07 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:20:14.786 18:34:07 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:20:14.786 18:34:07 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:20:14.786 18:34:07 -- common/autotest_common.sh@10 -- # set +x
00:20:15.354 setup.sh cleanup function not yet supported on FreeBSD
00:20:15.354 18:34:07 -- common/autotest_common.sh@1451 -- # return 0
00:20:15.354 18:34:07 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:20:15.354 18:34:07 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:15.354 18:34:07 -- common/autotest_common.sh@10 -- # set +x
00:20:15.354 18:34:07 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:20:15.354 18:34:07 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:15.354 18:34:07 -- common/autotest_common.sh@10 -- # set +x
00:20:15.612 18:34:07 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:15.612 18:34:07 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:15.612 18:34:07 -- spdk/autotest.sh@391 -- # hash lcov
00:20:15.612 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found
00:20:15.612 18:34:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:15.612 18:34:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:20:15.612 18:34:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:15.612 18:34:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:15.612 18:34:07 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:20:15.612 18:34:07 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:20:15.612 18:34:07 -- paths/export.sh@4 -- $ export PATH
00:20:15.612 18:34:07 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:20:15.612 18:34:07 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:20:15.612 18:34:07 -- common/autobuild_common.sh@444 -- $ date +%s
00:20:15.612 18:34:07 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721068447.XXXXXX
00:20:15.612 18:34:07 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721068447.XXXXXX.asHdbxgcDM
00:20:15.612 18:34:07 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:20:15.612 18:34:07 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:20:15.612 18:34:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:20:15.612 18:34:07 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
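[Note] The two scanbuild_exclude assignments above, together with the scanbuild= line that follows, show how autobuild_common.sh assembles a scan-build invocation: an exclude list is accumulated first and then spliced into the final analyzer command. A minimal sketch of the same pattern follows; the directory layout is taken from the log paths but the array form and the build command are illustrative, not SPDK's verbatim code.

    # Sketch of the exclude-list pattern; arrays avoid the quoting pitfalls of flat strings.
    repo=${repo:-$HOME/spdk_repo/spdk}      # assumed layout, matching the paths in the log
    out=$repo/../output
    scanbuild_exclude=(--exclude "$repo/dpdk/")
    scanbuild_exclude+=(--exclude "$repo/xnvme" --exclude /tmp)
    scanbuild=(scan-build -o "$out/scan-build-tmp" "${scanbuild_exclude[@]}" --status-bugs)
    "${scanbuild[@]}" make -j10             # scan-build wraps the build command it analyzes

The --status-bugs flag is what makes this usable as a CI gate: scan-build then exits non-zero when the analyzer reports any bugs, instead of always returning the build's own status.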
00:20:15.612 18:34:07 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:20:15.612 18:34:07 -- common/autobuild_common.sh@460 -- $ get_config_params
00:20:15.612 18:34:07 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:20:15.612 18:34:07 -- common/autotest_common.sh@10 -- $ set +x
00:20:15.612 18:34:07 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
00:20:15.612 18:34:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:20:15.612 18:34:08 -- pm/common@17 -- $ local monitor
00:20:15.612 18:34:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:15.612 18:34:08 -- pm/common@25 -- $ sleep 1
00:20:15.612 18:34:08 -- pm/common@21 -- $ date +%s
00:20:15.612 18:34:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721068448
00:20:15.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721068448_collect-vmstat.pm.log
00:20:16.804 18:34:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:20:16.804 18:34:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:20:16.804 18:34:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:20:16.804 18:34:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:20:16.804 18:34:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:20:16.804 18:34:09 -- spdk/autopackage.sh@19 -- $ timing_finish
00:20:16.804 18:34:09 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:16.804 18:34:09 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:20:16.804 18:34:09 -- spdk/autopackage.sh@20 -- $ exit 0
00:20:16.804 18:34:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:20:16.804 18:34:09 -- pm/common@29 -- $ signal_monitor_resources TERM
00:20:16.804 18:34:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:20:16.804 18:34:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:16.804 18:34:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:20:16.804 18:34:09 -- pm/common@44 -- $ pid=69632
00:20:16.804 18:34:09 -- pm/common@50 -- $ kill -TERM 69632
00:20:16.805 + [[ -n 1231 ]]
00:20:16.805 + sudo kill 1231
00:20:16.815 [Pipeline] }
00:20:16.836 [Pipeline] // timeout
00:20:16.842 [Pipeline] }
00:20:16.864 [Pipeline] // stage
00:20:16.870 [Pipeline] }
00:20:16.889 [Pipeline] // catchError
00:20:16.900 [Pipeline] stage
00:20:16.903 [Pipeline] { (Stop VM)
00:20:16.919 [Pipeline] sh
00:20:17.199 + vagrant halt
00:20:21.386 ==> default: Halting domain...
00:20:43.325 [Pipeline] sh
00:20:43.600 + vagrant destroy -f
00:20:47.790 ==> default: Removing domain...
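[Note] Before the VM is halted, the pm/common trace above shows the resource-monitor lifecycle: start_monitor_resources launches collect-vmstat in the background, the collector's pid lands in a pid file under output/power, stop_monitor_resources is registered as an EXIT trap, and on exit the trap signals TERM through that pid file (kill -TERM 69632 here). A minimal sketch of the pid-file pattern follows, using plain vmstat as a stand-in for SPDK's collect-vmstat script; paths and names are illustrative.

    # Illustrative pid-file lifecycle; not SPDK's verbatim pm/common code.
    power_dir=${power_dir:-./output/power}          # the log uses .../output/power
    mkdir -p "$power_dir"
    start_monitor_resources() {
        vmstat 1 >> "$power_dir/vmstat.log" &       # stand-in for scripts/perf/pm/collect-vmstat
        echo $! > "$power_dir/collect-vmstat.pid"   # record the collector's pid
    }
    stop_monitor_resources() {
        local pidfile=$power_dir/collect-vmstat.pid
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    }
    trap stop_monitor_resources EXIT                # guarantees the collector dies with the job
    start_monitor_resources

Routing the shutdown through an EXIT trap rather than an explicit call means the collector is also terminated on error paths, so a failed packaging step cannot leak a background monitor into the next build.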
00:20:47.804 [Pipeline] sh
00:20:48.085 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output
00:20:48.093 [Pipeline] }
00:20:48.111 [Pipeline] // stage
00:20:48.116 [Pipeline] }
00:20:48.133 [Pipeline] // dir
00:20:48.138 [Pipeline] }
00:20:48.154 [Pipeline] // wrap
00:20:48.161 [Pipeline] }
00:20:48.177 [Pipeline] // catchError
00:20:48.188 [Pipeline] stage
00:20:48.190 [Pipeline] { (Epilogue)
00:20:48.206 [Pipeline] sh
00:20:48.487 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:48.502 [Pipeline] catchError
00:20:48.504 [Pipeline] {
00:20:48.520 [Pipeline] sh
00:20:48.801 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:48.801 Artifacts sizes are good
00:20:48.810 [Pipeline] }
00:20:48.829 [Pipeline] // catchError
00:20:48.844 [Pipeline] archiveArtifacts
00:20:48.852 Archiving artifacts
00:20:48.895 [Pipeline] cleanWs
00:20:48.907 [WS-CLEANUP] Deleting project workspace...
00:20:48.907 [WS-CLEANUP] Deferred wipeout is used...
00:20:48.913 [WS-CLEANUP] done
00:20:48.915 [Pipeline] }
00:20:48.933 [Pipeline] // stage
00:20:48.937 [Pipeline] }
00:20:48.949 [Pipeline] // node
00:20:48.954 [Pipeline] End of Pipeline
00:20:48.988 Finished: SUCCESS
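[Note] In the Epilogue above, check_artifacts_size.sh prints "Artifacts sizes are good" when the moved output directory fits the job's budget, inside a catchError block so an oversized result fails the check without killing the pipeline. The real script's rules are not visible in this log; a hedged sketch of such a gate, with an invented threshold, might look like:

    # Illustrative size gate in the spirit of check_artifacts_size.sh; the limit is assumed.
    limit_kb=$((1024 * 1024))                       # 1 GiB budget, hypothetical
    used_kb=$(du -sk output | awk '{print $1}')     # total size of the artifact directory
    if [ "$used_kb" -gt "$limit_kb" ]; then
        echo "Artifacts are too large: ${used_kb} kB (limit ${limit_kb} kB)" >&2
        exit 1                                      # non-zero status trips the catchError block
    fi
    echo "Artifacts sizes are good"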